00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 319 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 2982 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.060 The recommended git tool is: git 00:00:00.060 using credential 00000000-0000-0000-0000-000000000002 00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.097 Fetching changes from the remote Git repository 00:00:00.099 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.220 > git --version # 'git version 2.39.2' 00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.730 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.742 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.752 Checking out Revision 34845be7ae448993c10fd8929d8277dc075ec12e (FETCH_HEAD) 00:00:04.753 > git config core.sparsecheckout # timeout=10 00:00:04.763 > git read-tree -mu HEAD # timeout=10 00:00:04.778 > git checkout -f 34845be7ae448993c10fd8929d8277dc075ec12e # timeout=5 00:00:04.796 Commit message: "ansible/roles/custom_facts: Escape instances of "\"" 00:00:04.796 > git rev-list --no-walk 34845be7ae448993c10fd8929d8277dc075ec12e # timeout=10 00:00:04.875 [Pipeline] Start of Pipeline 00:00:04.888 [Pipeline] library 00:00:04.890 Loading library shm_lib@master 00:00:04.890 Library shm_lib@master is cached. Copying from home. 00:00:04.905 [Pipeline] node 00:00:19.907 Still waiting to schedule task 00:00:19.908 Waiting for next available executor on ‘vagrant-vm-host’ 00:08:16.058 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:08:16.060 [Pipeline] { 00:08:16.073 [Pipeline] catchError 00:08:16.074 [Pipeline] { 00:08:16.089 [Pipeline] wrap 00:08:16.100 [Pipeline] { 00:08:16.108 [Pipeline] stage 00:08:16.110 [Pipeline] { (Prologue) 00:08:16.130 [Pipeline] echo 00:08:16.131 Node: VM-host-SM16 00:08:16.138 [Pipeline] cleanWs 00:08:16.147 [WS-CLEANUP] Deleting project workspace... 00:08:16.147 [WS-CLEANUP] Deferred wipeout is used... 
00:08:16.151 [WS-CLEANUP] done 00:08:16.306 [Pipeline] setCustomBuildProperty 00:08:16.384 [Pipeline] nodesByLabel 00:08:16.386 Found a total of 1 nodes with the 'sorcerer' label 00:08:16.395 [Pipeline] httpRequest 00:08:16.398 HttpMethod: GET 00:08:16.399 URL: http://10.211.164.101/packages/jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:08:16.426 Sending request to url: http://10.211.164.101/packages/jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:08:16.427 Response Code: HTTP/1.1 200 OK 00:08:16.428 Success: Status code 200 is in the accepted range: 200,404 00:08:16.428 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:08:16.544 [Pipeline] sh 00:08:16.821 + tar --no-same-owner -xf jbp_34845be7ae448993c10fd8929d8277dc075ec12e.tar.gz 00:08:16.841 [Pipeline] httpRequest 00:08:16.845 HttpMethod: GET 00:08:16.846 URL: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:08:16.847 Sending request to url: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:08:16.848 Response Code: HTTP/1.1 200 OK 00:08:16.849 Success: Status code 200 is in the accepted range: 200,404 00:08:16.850 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:08:19.066 [Pipeline] sh 00:08:19.343 + tar --no-same-owner -xf spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:08:22.700 [Pipeline] sh 00:08:22.977 + git -C spdk log --oneline -n5 00:08:22.977 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:08:22.977 5d5e4d333 nvmf/rpc: Fail listener add with different secure channel 00:08:22.977 54944c1d1 event: don't NOTICELOG when no RPC server started 00:08:22.977 460a2e391 lib/init: do not fail if missing RPC's subsystem in JSON config doesn't exist in app 00:08:22.977 5dc808124 init: add spdk_subsystem_exists() 00:08:22.999 [Pipeline] withCredentials 00:08:23.034 > git --version # timeout=10 00:08:23.049 > git --version # 'git version 2.39.2' 00:08:23.063 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:08:23.065 [Pipeline] { 00:08:23.075 [Pipeline] retry 00:08:23.077 [Pipeline] { 00:08:23.095 [Pipeline] sh 00:08:23.373 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:08:23.641 [Pipeline] } 00:08:23.658 [Pipeline] // retry 00:08:23.663 [Pipeline] } 00:08:23.677 [Pipeline] // withCredentials 00:08:23.690 [Pipeline] httpRequest 00:08:23.695 HttpMethod: GET 00:08:23.695 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:08:23.696 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:08:23.697 Response Code: HTTP/1.1 200 OK 00:08:23.698 Success: Status code 200 is in the accepted range: 200,404 00:08:23.698 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:08:24.926 [Pipeline] sh 00:08:25.201 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:08:27.121 [Pipeline] sh 00:08:27.398 + git -C dpdk log --oneline -n5 00:08:27.398 eeb0605f11 version: 23.11.0 00:08:27.398 238778122a doc: update release notes for 23.11 00:08:27.398 46aa6b3cfc doc: fix description of RSS features 00:08:27.398 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:08:27.398 7e421ae345 devtools: support skipping forbid rule check 00:08:27.416 [Pipeline] writeFile 00:08:27.433 
[Pipeline] sh 00:08:27.709 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:08:27.720 [Pipeline] sh 00:08:27.999 + cat autorun-spdk.conf 00:08:27.999 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:27.999 SPDK_TEST_NVMF=1 00:08:27.999 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:27.999 SPDK_TEST_USDT=1 00:08:27.999 SPDK_RUN_UBSAN=1 00:08:27.999 SPDK_TEST_NVMF_MDNS=1 00:08:27.999 NET_TYPE=virt 00:08:27.999 SPDK_JSONRPC_GO_CLIENT=1 00:08:27.999 SPDK_TEST_NATIVE_DPDK=v23.11 00:08:27.999 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:08:27.999 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:28.005 RUN_NIGHTLY=1 00:08:28.008 [Pipeline] } 00:08:28.024 [Pipeline] // stage 00:08:28.038 [Pipeline] stage 00:08:28.040 [Pipeline] { (Run VM) 00:08:28.055 [Pipeline] sh 00:08:28.334 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:08:28.334 + echo 'Start stage prepare_nvme.sh' 00:08:28.334 Start stage prepare_nvme.sh 00:08:28.334 + [[ -n 6 ]] 00:08:28.335 + disk_prefix=ex6 00:08:28.335 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:08:28.335 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:08:28.335 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:08:28.335 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:28.335 ++ SPDK_TEST_NVMF=1 00:08:28.335 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:28.335 ++ SPDK_TEST_USDT=1 00:08:28.335 ++ SPDK_RUN_UBSAN=1 00:08:28.335 ++ SPDK_TEST_NVMF_MDNS=1 00:08:28.335 ++ NET_TYPE=virt 00:08:28.335 ++ SPDK_JSONRPC_GO_CLIENT=1 00:08:28.335 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:08:28.335 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:08:28.335 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:28.335 ++ RUN_NIGHTLY=1 00:08:28.335 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:08:28.335 + nvme_files=() 00:08:28.335 + declare -A nvme_files 00:08:28.335 + backend_dir=/var/lib/libvirt/images/backends 00:08:28.335 + nvme_files['nvme.img']=5G 00:08:28.335 + nvme_files['nvme-cmb.img']=5G 00:08:28.335 + nvme_files['nvme-multi0.img']=4G 00:08:28.335 + nvme_files['nvme-multi1.img']=4G 00:08:28.335 + nvme_files['nvme-multi2.img']=4G 00:08:28.335 + nvme_files['nvme-openstack.img']=8G 00:08:28.335 + nvme_files['nvme-zns.img']=5G 00:08:28.335 + (( SPDK_TEST_NVME_PMR == 1 )) 00:08:28.335 + (( SPDK_TEST_FTL == 1 )) 00:08:28.335 + (( SPDK_TEST_NVME_FDP == 1 )) 00:08:28.335 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:08:28.335 + for nvme in "${!nvme_files[@]}" 00:08:28.335 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:08:28.335 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:08:28.335 + for nvme in "${!nvme_files[@]}" 00:08:28.335 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:08:28.335 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:08:28.335 + for nvme in "${!nvme_files[@]}" 00:08:28.335 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:08:28.335 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:08:28.335 + for nvme in "${!nvme_files[@]}" 00:08:28.335 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:08:28.335 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:08:28.335 + for nvme in "${!nvme_files[@]}" 00:08:28.335 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:08:28.335 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:08:28.335 + for nvme in "${!nvme_files[@]}" 00:08:28.335 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:08:28.335 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:08:28.335 + for nvme in "${!nvme_files[@]}" 00:08:28.335 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:08:28.335 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:08:28.335 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:08:28.335 + echo 'End stage prepare_nvme.sh' 00:08:28.335 End stage prepare_nvme.sh 00:08:28.347 [Pipeline] sh 00:08:28.626 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:08:28.626 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora38 00:08:28.626 00:08:28.626 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:08:28.626 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:08:28.626 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:08:28.626 HELP=0 00:08:28.626 DRY_RUN=0 00:08:28.626 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:08:28.626 NVME_DISKS_TYPE=nvme,nvme, 00:08:28.626 NVME_AUTO_CREATE=0 00:08:28.626 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:08:28.626 NVME_CMB=,, 00:08:28.626 NVME_PMR=,, 00:08:28.626 NVME_ZNS=,, 00:08:28.626 NVME_MS=,, 00:08:28.626 NVME_FDP=,, 00:08:28.626 
SPDK_VAGRANT_DISTRO=fedora38 00:08:28.626 SPDK_VAGRANT_VMCPU=10 00:08:28.626 SPDK_VAGRANT_VMRAM=12288 00:08:28.626 SPDK_VAGRANT_PROVIDER=libvirt 00:08:28.626 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:08:28.626 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:08:28.626 SPDK_OPENSTACK_NETWORK=0 00:08:28.626 VAGRANT_PACKAGE_BOX=0 00:08:28.626 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:08:28.626 FORCE_DISTRO=true 00:08:28.626 VAGRANT_BOX_VERSION= 00:08:28.626 EXTRA_VAGRANTFILES= 00:08:28.626 NIC_MODEL=e1000 00:08:28.626 00:08:28.626 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:08:28.626 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:08:31.910 Bringing machine 'default' up with 'libvirt' provider... 00:08:32.476 ==> default: Creating image (snapshot of base box volume). 00:08:32.735 ==> default: Creating domain with the following settings... 00:08:32.735 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713437760_e5434abc8fe737f93e6f 00:08:32.735 ==> default: -- Domain type: kvm 00:08:32.735 ==> default: -- Cpus: 10 00:08:32.735 ==> default: -- Feature: acpi 00:08:32.735 ==> default: -- Feature: apic 00:08:32.735 ==> default: -- Feature: pae 00:08:32.735 ==> default: -- Memory: 12288M 00:08:32.735 ==> default: -- Memory Backing: hugepages: 00:08:32.735 ==> default: -- Management MAC: 00:08:32.735 ==> default: -- Loader: 00:08:32.735 ==> default: -- Nvram: 00:08:32.735 ==> default: -- Base box: spdk/fedora38 00:08:32.735 ==> default: -- Storage pool: default 00:08:32.735 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713437760_e5434abc8fe737f93e6f.img (20G) 00:08:32.735 ==> default: -- Volume Cache: default 00:08:32.735 ==> default: -- Kernel: 00:08:32.735 ==> default: -- Initrd: 00:08:32.735 ==> default: -- Graphics Type: vnc 00:08:32.735 ==> default: -- Graphics Port: -1 00:08:32.735 ==> default: -- Graphics IP: 127.0.0.1 00:08:32.735 ==> default: -- Graphics Password: Not defined 00:08:32.735 ==> default: -- Video Type: cirrus 00:08:32.735 ==> default: -- Video VRAM: 9216 00:08:32.735 ==> default: -- Sound Type: 00:08:32.735 ==> default: -- Keymap: en-us 00:08:32.735 ==> default: -- TPM Path: 00:08:32.735 ==> default: -- INPUT: type=mouse, bus=ps2 00:08:32.735 ==> default: -- Command line args: 00:08:32.735 ==> default: -> value=-device, 00:08:32.735 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:08:32.735 ==> default: -> value=-drive, 00:08:32.735 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:08:32.735 ==> default: -> value=-device, 00:08:32.735 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:32.735 ==> default: -> value=-device, 00:08:32.735 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:08:32.735 ==> default: -> value=-drive, 00:08:32.735 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:08:32.735 ==> default: -> value=-device, 00:08:32.735 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:32.735 ==> default: -> value=-drive, 00:08:32.735 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:08:32.735 ==> default: -> value=-device, 00:08:32.735 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:32.735 ==> default: -> value=-drive, 00:08:32.735 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:08:32.735 ==> default: -> value=-device, 00:08:32.735 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:32.735 ==> default: Creating shared folders metadata... 00:08:32.735 ==> default: Starting domain. 00:08:34.637 ==> default: Waiting for domain to get an IP address... 00:08:56.561 ==> default: Waiting for SSH to become available... 00:08:56.561 ==> default: Configuring and enabling network interfaces... 00:08:59.846 default: SSH address: 192.168.121.167:22 00:08:59.846 default: SSH username: vagrant 00:08:59.846 default: SSH auth method: private key 00:09:02.386 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:09:08.986 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:09:15.543 ==> default: Mounting SSHFS shared folder... 00:09:16.481 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:09:16.481 ==> default: Checking Mount.. 00:09:17.880 ==> default: Folder Successfully Mounted! 00:09:17.880 ==> default: Running provisioner: file... 00:09:18.448 default: ~/.gitconfig => .gitconfig 00:09:19.017 00:09:19.017 SUCCESS! 00:09:19.017 00:09:19.017 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:09:19.017 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:09:19.017 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:09:19.017 00:09:19.026 [Pipeline] } 00:09:19.042 [Pipeline] // stage 00:09:19.050 [Pipeline] dir 00:09:19.050 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:09:19.051 [Pipeline] { 00:09:19.063 [Pipeline] catchError 00:09:19.065 [Pipeline] { 00:09:19.076 [Pipeline] sh 00:09:19.350 + vagrant ssh-config --host vagrant+ 00:09:19.350 sed -ne /^Host/,$p 00:09:19.350 + tee ssh_conf 00:09:23.561 Host vagrant 00:09:23.562 HostName 192.168.121.167 00:09:23.562 User vagrant 00:09:23.562 Port 22 00:09:23.562 UserKnownHostsFile /dev/null 00:09:23.562 StrictHostKeyChecking no 00:09:23.562 PasswordAuthentication no 00:09:23.562 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:09:23.562 IdentitiesOnly yes 00:09:23.562 LogLevel FATAL 00:09:23.562 ForwardAgent yes 00:09:23.562 ForwardX11 yes 00:09:23.562 00:09:23.576 [Pipeline] withEnv 00:09:23.578 [Pipeline] { 00:09:23.595 [Pipeline] sh 00:09:23.874 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:09:23.874 source /etc/os-release 00:09:23.874 [[ -e /image.version ]] && img=$(< /image.version) 00:09:23.874 # Minimal, systemd-like check. 
00:09:23.874 if [[ -e /.dockerenv ]]; then 00:09:23.874 # Clear garbage from the node's name: 00:09:23.874 # agt-er_autotest_547-896 -> autotest_547-896 00:09:23.874 # $HOSTNAME is the actual container id 00:09:23.874 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:09:23.874 if mountpoint -q /etc/hostname; then 00:09:23.874 # We can assume this is a mount from a host where container is running, 00:09:23.874 # so fetch its hostname to easily identify the target swarm worker. 00:09:23.874 container="$(< /etc/hostname) ($agent)" 00:09:23.874 else 00:09:23.874 # Fallback 00:09:23.874 container=$agent 00:09:23.874 fi 00:09:23.874 fi 00:09:23.874 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:09:23.874 00:09:24.143 [Pipeline] } 00:09:24.163 [Pipeline] // withEnv 00:09:24.173 [Pipeline] setCustomBuildProperty 00:09:24.189 [Pipeline] stage 00:09:24.192 [Pipeline] { (Tests) 00:09:24.213 [Pipeline] sh 00:09:24.493 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:09:24.512 [Pipeline] timeout 00:09:24.513 Timeout set to expire in 40 min 00:09:24.515 [Pipeline] { 00:09:24.535 [Pipeline] sh 00:09:24.816 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:09:25.382 HEAD is now at 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:09:25.396 [Pipeline] sh 00:09:25.673 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:09:25.945 [Pipeline] sh 00:09:26.223 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:09:26.238 [Pipeline] sh 00:09:26.516 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:09:26.775 ++ readlink -f spdk_repo 00:09:26.775 + DIR_ROOT=/home/vagrant/spdk_repo 00:09:26.775 + [[ -n /home/vagrant/spdk_repo ]] 00:09:26.775 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:09:26.775 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:09:26.775 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:09:26.775 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:09:26.775 + [[ -d /home/vagrant/spdk_repo/output ]] 00:09:26.775 + cd /home/vagrant/spdk_repo 00:09:26.775 + source /etc/os-release 00:09:26.775 ++ NAME='Fedora Linux' 00:09:26.775 ++ VERSION='38 (Cloud Edition)' 00:09:26.775 ++ ID=fedora 00:09:26.775 ++ VERSION_ID=38 00:09:26.775 ++ VERSION_CODENAME= 00:09:26.775 ++ PLATFORM_ID=platform:f38 00:09:26.775 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:09:26.775 ++ ANSI_COLOR='0;38;2;60;110;180' 00:09:26.775 ++ LOGO=fedora-logo-icon 00:09:26.775 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:09:26.775 ++ HOME_URL=https://fedoraproject.org/ 00:09:26.775 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:09:26.775 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:09:26.775 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:09:26.775 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:09:26.775 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:09:26.775 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:09:26.775 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:09:26.775 ++ SUPPORT_END=2024-05-14 00:09:26.775 ++ VARIANT='Cloud Edition' 00:09:26.775 ++ VARIANT_ID=cloud 00:09:26.775 + uname -a 00:09:26.775 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:09:26.775 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:27.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.034 Hugepages 00:09:27.034 node hugesize free / total 00:09:27.034 node0 1048576kB 0 / 0 00:09:27.034 node0 2048kB 0 / 0 00:09:27.034 00:09:27.034 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:27.034 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:27.293 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:27.293 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:27.293 + rm -f /tmp/spdk-ld-path 00:09:27.293 + source autorun-spdk.conf 00:09:27.293 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:27.293 ++ SPDK_TEST_NVMF=1 00:09:27.293 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:27.293 ++ SPDK_TEST_USDT=1 00:09:27.293 ++ SPDK_RUN_UBSAN=1 00:09:27.293 ++ SPDK_TEST_NVMF_MDNS=1 00:09:27.293 ++ NET_TYPE=virt 00:09:27.293 ++ SPDK_JSONRPC_GO_CLIENT=1 00:09:27.293 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:09:27.293 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:09:27.293 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:27.293 ++ RUN_NIGHTLY=1 00:09:27.293 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:09:27.293 + [[ -n '' ]] 00:09:27.293 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:09:27.293 + for M in /var/spdk/build-*-manifest.txt 00:09:27.293 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:09:27.293 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:27.293 + for M in /var/spdk/build-*-manifest.txt 00:09:27.293 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:09:27.293 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:27.293 ++ uname 00:09:27.293 + [[ Linux == \L\i\n\u\x ]] 00:09:27.293 + sudo dmesg -T 00:09:27.293 + sudo dmesg --clear 00:09:27.293 + dmesg_pid=5999 00:09:27.293 + sudo dmesg -Tw 00:09:27.293 + [[ Fedora Linux == FreeBSD ]] 00:09:27.293 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:27.293 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:27.293 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:09:27.293 + 
[[ -x /usr/src/fio-static/fio ]] 00:09:27.293 + export FIO_BIN=/usr/src/fio-static/fio 00:09:27.293 + FIO_BIN=/usr/src/fio-static/fio 00:09:27.293 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:09:27.293 + [[ ! -v VFIO_QEMU_BIN ]] 00:09:27.293 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:09:27.293 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:27.293 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:27.293 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:09:27.293 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:27.293 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:27.293 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:27.293 Test configuration: 00:09:27.293 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:27.293 SPDK_TEST_NVMF=1 00:09:27.293 SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:27.293 SPDK_TEST_USDT=1 00:09:27.293 SPDK_RUN_UBSAN=1 00:09:27.293 SPDK_TEST_NVMF_MDNS=1 00:09:27.293 NET_TYPE=virt 00:09:27.293 SPDK_JSONRPC_GO_CLIENT=1 00:09:27.293 SPDK_TEST_NATIVE_DPDK=v23.11 00:09:27.293 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:09:27.293 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:27.293 RUN_NIGHTLY=1 10:56:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.293 10:56:55 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:09:27.293 10:56:55 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.293 10:56:55 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.293 10:56:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.293 10:56:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.293 10:56:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.293 10:56:55 -- paths/export.sh@5 -- $ export PATH 00:09:27.293 10:56:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.293 10:56:55 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:09:27.293 10:56:55 -- common/autobuild_common.sh@435 -- $ date +%s 
00:09:27.293 10:56:55 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713437815.XXXXXX 00:09:27.552 10:56:55 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713437815.w3FXGD 00:09:27.552 10:56:55 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:09:27.552 10:56:55 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:09:27.552 10:56:55 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:09:27.552 10:56:55 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:09:27.552 10:56:55 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:09:27.552 10:56:55 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:09:27.552 10:56:55 -- common/autobuild_common.sh@451 -- $ get_config_params 00:09:27.552 10:56:55 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:09:27.552 10:56:55 -- common/autotest_common.sh@10 -- $ set +x 00:09:27.552 10:56:55 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:09:27.552 10:56:55 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:09:27.552 10:56:55 -- pm/common@17 -- $ local monitor 00:09:27.552 10:56:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:27.552 10:56:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=6035 00:09:27.552 10:56:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:27.552 10:56:55 -- pm/common@21 -- $ date +%s 00:09:27.552 10:56:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=6037 00:09:27.552 10:56:55 -- pm/common@26 -- $ sleep 1 00:09:27.552 10:56:55 -- pm/common@21 -- $ date +%s 00:09:27.552 10:56:55 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713437815 00:09:27.552 10:56:55 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713437815 00:09:27.552 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713437815_collect-vmstat.pm.log 00:09:27.552 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713437815_collect-cpu-load.pm.log 00:09:28.488 10:56:56 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:09:28.488 10:56:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:09:28.488 10:56:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:09:28.488 10:56:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:28.488 10:56:56 -- spdk/autobuild.sh@16 -- $ date -u 00:09:28.488 Thu Apr 18 10:56:56 AM UTC 2024 00:09:28.488 10:56:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:09:28.488 v24.05-pre-407-g65b4e17c6 00:09:28.488 10:56:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:09:28.488 10:56:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:09:28.488 10:56:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 
00:09:28.488 10:56:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:09:28.488 10:56:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:09:28.488 10:56:56 -- common/autotest_common.sh@10 -- $ set +x 00:09:28.488 ************************************ 00:09:28.488 START TEST ubsan 00:09:28.488 ************************************ 00:09:28.488 using ubsan 00:09:28.488 10:56:57 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:09:28.488 00:09:28.488 real 0m0.000s 00:09:28.488 user 0m0.000s 00:09:28.488 sys 0m0.000s 00:09:28.488 10:56:57 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:09:28.488 ************************************ 00:09:28.488 END TEST ubsan 00:09:28.488 ************************************ 00:09:28.488 10:56:57 -- common/autotest_common.sh@10 -- $ set +x 00:09:28.488 10:56:57 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:09:28.488 10:56:57 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:09:28.488 10:56:57 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:09:28.488 10:56:57 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:09:28.488 10:56:57 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:09:28.488 10:56:57 -- common/autotest_common.sh@10 -- $ set +x 00:09:28.745 ************************************ 00:09:28.745 START TEST build_native_dpdk 00:09:28.745 ************************************ 00:09:28.745 10:56:57 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:09:28.745 10:56:57 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:09:28.745 10:56:57 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:09:28.745 10:56:57 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:09:28.745 10:56:57 -- common/autobuild_common.sh@51 -- $ local compiler 00:09:28.745 10:56:57 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:09:28.745 10:56:57 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:09:28.745 10:56:57 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:09:28.745 10:56:57 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:09:28.745 10:56:57 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:09:28.745 10:56:57 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:09:28.745 10:56:57 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:09:28.745 10:56:57 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:09:28.745 10:56:57 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:09:28.745 10:56:57 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:09:28.746 10:56:57 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:09:28.746 10:56:57 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:09:28.746 10:56:57 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:09:28.746 10:56:57 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:09:28.746 10:56:57 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:09:28.746 eeb0605f11 version: 23.11.0 00:09:28.746 238778122a doc: update release notes for 23.11 00:09:28.746 46aa6b3cfc doc: fix description of RSS features 00:09:28.746 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:09:28.746 7e421ae345 devtools: support skipping forbid rule check 00:09:28.746 10:56:57 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:09:28.746 10:56:57 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:09:28.746 10:56:57 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:09:28.746 10:56:57 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:09:28.746 10:56:57 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:09:28.746 10:56:57 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:09:28.746 10:56:57 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:09:28.746 10:56:57 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:09:28.746 10:56:57 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:09:28.746 10:56:57 -- common/autobuild_common.sh@168 -- $ uname -s 00:09:28.746 10:56:57 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:09:28.746 10:56:57 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:09:28.746 10:56:57 -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:09:28.746 10:56:57 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:09:28.746 10:56:57 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:09:28.746 10:56:57 -- scripts/common.sh@333 -- $ IFS=.-: 00:09:28.746 10:56:57 -- scripts/common.sh@333 -- $ read -ra ver1 00:09:28.746 10:56:57 -- scripts/common.sh@334 -- $ IFS=.-: 00:09:28.746 10:56:57 -- scripts/common.sh@334 -- $ read -ra ver2 00:09:28.746 10:56:57 -- scripts/common.sh@335 -- $ local 'op=<' 00:09:28.746 10:56:57 -- scripts/common.sh@337 -- $ ver1_l=3 00:09:28.746 10:56:57 -- scripts/common.sh@338 -- $ ver2_l=3 00:09:28.746 10:56:57 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:09:28.746 10:56:57 -- scripts/common.sh@341 -- $ case "$op" in 00:09:28.746 10:56:57 -- scripts/common.sh@342 -- $ : 1 00:09:28.746 10:56:57 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:09:28.746 10:56:57 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.746 10:56:57 -- scripts/common.sh@362 -- $ decimal 23 00:09:28.746 10:56:57 -- scripts/common.sh@350 -- $ local d=23 00:09:28.746 10:56:57 -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:09:28.746 10:56:57 -- scripts/common.sh@352 -- $ echo 23 00:09:28.746 10:56:57 -- scripts/common.sh@362 -- $ ver1[v]=23 00:09:28.746 10:56:57 -- scripts/common.sh@363 -- $ decimal 21 00:09:28.746 10:56:57 -- scripts/common.sh@350 -- $ local d=21 00:09:28.746 10:56:57 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:09:28.746 10:56:57 -- scripts/common.sh@352 -- $ echo 21 00:09:28.746 10:56:57 -- scripts/common.sh@363 -- $ ver2[v]=21 00:09:28.746 10:56:57 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:09:28.746 10:56:57 -- scripts/common.sh@364 -- $ return 1 00:09:28.746 10:56:57 -- common/autobuild_common.sh@173 -- $ patch -p1 00:09:28.746 patching file config/rte_config.h 00:09:28.746 Hunk #1 succeeded at 60 (offset 1 line). 00:09:28.746 10:56:57 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:09:28.746 10:56:57 -- common/autobuild_common.sh@178 -- $ uname -s 00:09:28.746 10:56:57 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:09:28.746 10:56:57 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:09:28.746 10:56:57 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:09:34.011 The Meson build system 00:09:34.011 Version: 1.3.1 00:09:34.011 Source dir: /home/vagrant/spdk_repo/dpdk 00:09:34.011 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:09:34.011 Build type: native build 00:09:34.011 Program cat found: YES (/usr/bin/cat) 00:09:34.011 Project name: DPDK 00:09:34.011 Project version: 23.11.0 00:09:34.011 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:34.011 C linker for the host machine: gcc ld.bfd 2.39-16 00:09:34.011 Host machine cpu family: x86_64 00:09:34.011 Host machine cpu: x86_64 00:09:34.011 Message: ## Building in Developer Mode ## 00:09:34.011 Program pkg-config found: YES (/usr/bin/pkg-config) 00:09:34.011 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:09:34.011 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:09:34.011 Program python3 found: YES (/usr/bin/python3) 00:09:34.011 Program cat found: YES (/usr/bin/cat) 00:09:34.011 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:09:34.011 Compiler for C supports arguments -march=native: YES 00:09:34.011 Checking for size of "void *" : 8 00:09:34.011 Checking for size of "void *" : 8 (cached) 00:09:34.011 Library m found: YES 00:09:34.011 Library numa found: YES 00:09:34.011 Has header "numaif.h" : YES 00:09:34.011 Library fdt found: NO 00:09:34.011 Library execinfo found: NO 00:09:34.011 Has header "execinfo.h" : YES 00:09:34.011 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:34.011 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:34.011 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:34.011 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:34.012 Run-time dependency openssl found: YES 3.0.9 00:09:34.012 Run-time dependency libpcap found: YES 1.10.4 00:09:34.012 Has header "pcap.h" with dependency libpcap: YES 00:09:34.012 Compiler for C supports arguments -Wcast-qual: YES 00:09:34.012 Compiler for C supports arguments -Wdeprecated: YES 00:09:34.012 Compiler for C supports arguments -Wformat: YES 00:09:34.012 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:34.012 Compiler for C supports arguments -Wformat-security: NO 00:09:34.012 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:34.012 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:34.012 Compiler for C supports arguments -Wnested-externs: YES 00:09:34.012 Compiler for C supports arguments -Wold-style-definition: YES 00:09:34.012 Compiler for C supports arguments -Wpointer-arith: YES 00:09:34.012 Compiler for C supports arguments -Wsign-compare: YES 00:09:34.012 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:34.012 Compiler for C supports arguments -Wundef: YES 00:09:34.012 Compiler for C supports arguments -Wwrite-strings: YES 00:09:34.012 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:09:34.012 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:09:34.012 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:34.012 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:34.012 Program objdump found: YES (/usr/bin/objdump) 00:09:34.012 Compiler for C supports arguments -mavx512f: YES 00:09:34.012 Checking if "AVX512 checking" compiles: YES 00:09:34.012 Fetching value of define "__SSE4_2__" : 1 00:09:34.012 Fetching value of define "__AES__" : 1 00:09:34.012 Fetching value of define "__AVX__" : 1 00:09:34.012 Fetching value of define "__AVX2__" : 1 00:09:34.012 Fetching value of define "__AVX512BW__" : (undefined) 00:09:34.012 Fetching value of define "__AVX512CD__" : (undefined) 00:09:34.012 Fetching value of define "__AVX512DQ__" : (undefined) 00:09:34.012 Fetching value of define "__AVX512F__" : (undefined) 00:09:34.012 Fetching value of define "__AVX512VL__" : (undefined) 00:09:34.012 Fetching value of define "__PCLMUL__" : 1 00:09:34.012 Fetching value of define "__RDRND__" : 1 00:09:34.012 Fetching value of define "__RDSEED__" : 1 00:09:34.012 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:09:34.012 Fetching value of define "__znver1__" : (undefined) 00:09:34.012 Fetching value of define "__znver2__" : (undefined) 00:09:34.012 Fetching value of define "__znver3__" : (undefined) 00:09:34.012 Fetching value of define "__znver4__" : (undefined) 00:09:34.012 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:34.012 Message: lib/log: Defining dependency "log" 00:09:34.012 Message: lib/kvargs: Defining dependency "kvargs" 00:09:34.012 
Message: lib/telemetry: Defining dependency "telemetry" 00:09:34.012 Checking for function "getentropy" : NO 00:09:34.012 Message: lib/eal: Defining dependency "eal" 00:09:34.012 Message: lib/ring: Defining dependency "ring" 00:09:34.012 Message: lib/rcu: Defining dependency "rcu" 00:09:34.012 Message: lib/mempool: Defining dependency "mempool" 00:09:34.012 Message: lib/mbuf: Defining dependency "mbuf" 00:09:34.012 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:34.012 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:34.012 Compiler for C supports arguments -mpclmul: YES 00:09:34.012 Compiler for C supports arguments -maes: YES 00:09:34.012 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:34.012 Compiler for C supports arguments -mavx512bw: YES 00:09:34.012 Compiler for C supports arguments -mavx512dq: YES 00:09:34.012 Compiler for C supports arguments -mavx512vl: YES 00:09:34.012 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:34.012 Compiler for C supports arguments -mavx2: YES 00:09:34.012 Compiler for C supports arguments -mavx: YES 00:09:34.012 Message: lib/net: Defining dependency "net" 00:09:34.012 Message: lib/meter: Defining dependency "meter" 00:09:34.012 Message: lib/ethdev: Defining dependency "ethdev" 00:09:34.012 Message: lib/pci: Defining dependency "pci" 00:09:34.012 Message: lib/cmdline: Defining dependency "cmdline" 00:09:34.012 Message: lib/metrics: Defining dependency "metrics" 00:09:34.012 Message: lib/hash: Defining dependency "hash" 00:09:34.012 Message: lib/timer: Defining dependency "timer" 00:09:34.012 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:34.012 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:09:34.012 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:09:34.012 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:09:34.012 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:09:34.012 Message: lib/acl: Defining dependency "acl" 00:09:34.012 Message: lib/bbdev: Defining dependency "bbdev" 00:09:34.012 Message: lib/bitratestats: Defining dependency "bitratestats" 00:09:34.012 Run-time dependency libelf found: YES 0.190 00:09:34.012 Message: lib/bpf: Defining dependency "bpf" 00:09:34.012 Message: lib/cfgfile: Defining dependency "cfgfile" 00:09:34.012 Message: lib/compressdev: Defining dependency "compressdev" 00:09:34.012 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:34.012 Message: lib/distributor: Defining dependency "distributor" 00:09:34.012 Message: lib/dmadev: Defining dependency "dmadev" 00:09:34.012 Message: lib/efd: Defining dependency "efd" 00:09:34.012 Message: lib/eventdev: Defining dependency "eventdev" 00:09:34.012 Message: lib/dispatcher: Defining dependency "dispatcher" 00:09:34.012 Message: lib/gpudev: Defining dependency "gpudev" 00:09:34.012 Message: lib/gro: Defining dependency "gro" 00:09:34.012 Message: lib/gso: Defining dependency "gso" 00:09:34.012 Message: lib/ip_frag: Defining dependency "ip_frag" 00:09:34.012 Message: lib/jobstats: Defining dependency "jobstats" 00:09:34.012 Message: lib/latencystats: Defining dependency "latencystats" 00:09:34.012 Message: lib/lpm: Defining dependency "lpm" 00:09:34.012 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:34.012 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:09:34.012 Fetching value of define "__AVX512IFMA__" : (undefined) 00:09:34.012 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:09:34.012 Message: lib/member: Defining dependency "member" 00:09:34.012 Message: lib/pcapng: Defining dependency "pcapng" 00:09:34.012 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:34.012 Message: lib/power: Defining dependency "power" 00:09:34.012 Message: lib/rawdev: Defining dependency "rawdev" 00:09:34.012 Message: lib/regexdev: Defining dependency "regexdev" 00:09:34.012 Message: lib/mldev: Defining dependency "mldev" 00:09:34.012 Message: lib/rib: Defining dependency "rib" 00:09:34.012 Message: lib/reorder: Defining dependency "reorder" 00:09:34.012 Message: lib/sched: Defining dependency "sched" 00:09:34.012 Message: lib/security: Defining dependency "security" 00:09:34.012 Message: lib/stack: Defining dependency "stack" 00:09:34.012 Has header "linux/userfaultfd.h" : YES 00:09:34.012 Has header "linux/vduse.h" : YES 00:09:34.012 Message: lib/vhost: Defining dependency "vhost" 00:09:34.012 Message: lib/ipsec: Defining dependency "ipsec" 00:09:34.012 Message: lib/pdcp: Defining dependency "pdcp" 00:09:34.012 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:34.012 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:09:34.012 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:09:34.012 Compiler for C supports arguments -mavx512bw: YES (cached) 00:09:34.012 Message: lib/fib: Defining dependency "fib" 00:09:34.012 Message: lib/port: Defining dependency "port" 00:09:34.012 Message: lib/pdump: Defining dependency "pdump" 00:09:34.012 Message: lib/table: Defining dependency "table" 00:09:34.012 Message: lib/pipeline: Defining dependency "pipeline" 00:09:34.012 Message: lib/graph: Defining dependency "graph" 00:09:34.012 Message: lib/node: Defining dependency "node" 00:09:34.012 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:09:35.385 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:09:35.385 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:35.385 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:35.385 Compiler for C supports arguments -Wno-sign-compare: YES 00:09:35.385 Compiler for C supports arguments -Wno-unused-value: YES 00:09:35.385 Compiler for C supports arguments -Wno-format: YES 00:09:35.385 Compiler for C supports arguments -Wno-format-security: YES 00:09:35.385 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:09:35.385 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:09:35.385 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:09:35.385 Compiler for C supports arguments -Wno-unused-parameter: YES 00:09:35.385 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:35.385 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:35.385 Compiler for C supports arguments -mavx512bw: YES (cached) 00:09:35.385 Compiler for C supports arguments -march=skylake-avx512: YES 00:09:35.385 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:09:35.385 Has header "sys/epoll.h" : YES 00:09:35.385 Program doxygen found: YES (/usr/bin/doxygen) 00:09:35.385 Configuring doxy-api-html.conf using configuration 00:09:35.385 Configuring doxy-api-man.conf using configuration 00:09:35.385 Program mandb found: YES (/usr/bin/mandb) 00:09:35.385 Program sphinx-build found: NO 00:09:35.385 Configuring rte_build_config.h using configuration 00:09:35.385 Message: 00:09:35.385 ================= 00:09:35.385 Applications Enabled 00:09:35.385 ================= 00:09:35.385 
00:09:35.385 apps: 00:09:35.385 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:09:35.385 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:09:35.385 test-pmd, test-regex, test-sad, test-security-perf, 00:09:35.385 00:09:35.385 Message: 00:09:35.385 ================= 00:09:35.385 Libraries Enabled 00:09:35.385 ================= 00:09:35.385 00:09:35.385 libs: 00:09:35.385 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:09:35.385 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:09:35.385 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:09:35.385 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:09:35.385 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:09:35.385 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:09:35.385 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:09:35.385 00:09:35.385 00:09:35.385 Message: 00:09:35.385 =============== 00:09:35.385 Drivers Enabled 00:09:35.385 =============== 00:09:35.385 00:09:35.385 common: 00:09:35.385 00:09:35.385 bus: 00:09:35.385 pci, vdev, 00:09:35.385 mempool: 00:09:35.385 ring, 00:09:35.385 dma: 00:09:35.385 00:09:35.385 net: 00:09:35.385 i40e, 00:09:35.385 raw: 00:09:35.385 00:09:35.385 crypto: 00:09:35.385 00:09:35.385 compress: 00:09:35.385 00:09:35.385 regex: 00:09:35.385 00:09:35.385 ml: 00:09:35.385 00:09:35.385 vdpa: 00:09:35.385 00:09:35.385 event: 00:09:35.385 00:09:35.385 baseband: 00:09:35.385 00:09:35.385 gpu: 00:09:35.385 00:09:35.385 00:09:35.385 Message: 00:09:35.385 ================= 00:09:35.385 Content Skipped 00:09:35.385 ================= 00:09:35.385 00:09:35.385 apps: 00:09:35.385 00:09:35.385 libs: 00:09:35.385 00:09:35.385 drivers: 00:09:35.385 common/cpt: not in enabled drivers build config 00:09:35.385 common/dpaax: not in enabled drivers build config 00:09:35.385 common/iavf: not in enabled drivers build config 00:09:35.385 common/idpf: not in enabled drivers build config 00:09:35.385 common/mvep: not in enabled drivers build config 00:09:35.385 common/octeontx: not in enabled drivers build config 00:09:35.385 bus/auxiliary: not in enabled drivers build config 00:09:35.385 bus/cdx: not in enabled drivers build config 00:09:35.385 bus/dpaa: not in enabled drivers build config 00:09:35.385 bus/fslmc: not in enabled drivers build config 00:09:35.385 bus/ifpga: not in enabled drivers build config 00:09:35.385 bus/platform: not in enabled drivers build config 00:09:35.385 bus/vmbus: not in enabled drivers build config 00:09:35.385 common/cnxk: not in enabled drivers build config 00:09:35.385 common/mlx5: not in enabled drivers build config 00:09:35.385 common/nfp: not in enabled drivers build config 00:09:35.385 common/qat: not in enabled drivers build config 00:09:35.385 common/sfc_efx: not in enabled drivers build config 00:09:35.385 mempool/bucket: not in enabled drivers build config 00:09:35.385 mempool/cnxk: not in enabled drivers build config 00:09:35.385 mempool/dpaa: not in enabled drivers build config 00:09:35.385 mempool/dpaa2: not in enabled drivers build config 00:09:35.385 mempool/octeontx: not in enabled drivers build config 00:09:35.385 mempool/stack: not in enabled drivers build config 00:09:35.385 dma/cnxk: not in enabled drivers build config 00:09:35.385 dma/dpaa: not in enabled drivers build config 00:09:35.385 dma/dpaa2: not in enabled drivers build config 00:09:35.385 dma/hisilicon: 
not in enabled drivers build config 00:09:35.385 dma/idxd: not in enabled drivers build config 00:09:35.385 dma/ioat: not in enabled drivers build config 00:09:35.385 dma/skeleton: not in enabled drivers build config 00:09:35.385 net/af_packet: not in enabled drivers build config 00:09:35.385 net/af_xdp: not in enabled drivers build config 00:09:35.385 net/ark: not in enabled drivers build config 00:09:35.385 net/atlantic: not in enabled drivers build config 00:09:35.385 net/avp: not in enabled drivers build config 00:09:35.385 net/axgbe: not in enabled drivers build config 00:09:35.385 net/bnx2x: not in enabled drivers build config 00:09:35.385 net/bnxt: not in enabled drivers build config 00:09:35.385 net/bonding: not in enabled drivers build config 00:09:35.386 net/cnxk: not in enabled drivers build config 00:09:35.386 net/cpfl: not in enabled drivers build config 00:09:35.386 net/cxgbe: not in enabled drivers build config 00:09:35.386 net/dpaa: not in enabled drivers build config 00:09:35.386 net/dpaa2: not in enabled drivers build config 00:09:35.386 net/e1000: not in enabled drivers build config 00:09:35.386 net/ena: not in enabled drivers build config 00:09:35.386 net/enetc: not in enabled drivers build config 00:09:35.386 net/enetfec: not in enabled drivers build config 00:09:35.386 net/enic: not in enabled drivers build config 00:09:35.386 net/failsafe: not in enabled drivers build config 00:09:35.386 net/fm10k: not in enabled drivers build config 00:09:35.386 net/gve: not in enabled drivers build config 00:09:35.386 net/hinic: not in enabled drivers build config 00:09:35.386 net/hns3: not in enabled drivers build config 00:09:35.386 net/iavf: not in enabled drivers build config 00:09:35.386 net/ice: not in enabled drivers build config 00:09:35.386 net/idpf: not in enabled drivers build config 00:09:35.386 net/igc: not in enabled drivers build config 00:09:35.386 net/ionic: not in enabled drivers build config 00:09:35.386 net/ipn3ke: not in enabled drivers build config 00:09:35.386 net/ixgbe: not in enabled drivers build config 00:09:35.386 net/mana: not in enabled drivers build config 00:09:35.386 net/memif: not in enabled drivers build config 00:09:35.386 net/mlx4: not in enabled drivers build config 00:09:35.386 net/mlx5: not in enabled drivers build config 00:09:35.386 net/mvneta: not in enabled drivers build config 00:09:35.386 net/mvpp2: not in enabled drivers build config 00:09:35.386 net/netvsc: not in enabled drivers build config 00:09:35.386 net/nfb: not in enabled drivers build config 00:09:35.386 net/nfp: not in enabled drivers build config 00:09:35.386 net/ngbe: not in enabled drivers build config 00:09:35.386 net/null: not in enabled drivers build config 00:09:35.386 net/octeontx: not in enabled drivers build config 00:09:35.386 net/octeon_ep: not in enabled drivers build config 00:09:35.386 net/pcap: not in enabled drivers build config 00:09:35.386 net/pfe: not in enabled drivers build config 00:09:35.386 net/qede: not in enabled drivers build config 00:09:35.386 net/ring: not in enabled drivers build config 00:09:35.386 net/sfc: not in enabled drivers build config 00:09:35.386 net/softnic: not in enabled drivers build config 00:09:35.386 net/tap: not in enabled drivers build config 00:09:35.386 net/thunderx: not in enabled drivers build config 00:09:35.386 net/txgbe: not in enabled drivers build config 00:09:35.386 net/vdev_netvsc: not in enabled drivers build config 00:09:35.386 net/vhost: not in enabled drivers build config 00:09:35.386 net/virtio: not in enabled 
drivers build config 00:09:35.386 net/vmxnet3: not in enabled drivers build config 00:09:35.386 raw/cnxk_bphy: not in enabled drivers build config 00:09:35.386 raw/cnxk_gpio: not in enabled drivers build config 00:09:35.386 raw/dpaa2_cmdif: not in enabled drivers build config 00:09:35.386 raw/ifpga: not in enabled drivers build config 00:09:35.386 raw/ntb: not in enabled drivers build config 00:09:35.386 raw/skeleton: not in enabled drivers build config 00:09:35.386 crypto/armv8: not in enabled drivers build config 00:09:35.386 crypto/bcmfs: not in enabled drivers build config 00:09:35.386 crypto/caam_jr: not in enabled drivers build config 00:09:35.386 crypto/ccp: not in enabled drivers build config 00:09:35.386 crypto/cnxk: not in enabled drivers build config 00:09:35.386 crypto/dpaa_sec: not in enabled drivers build config 00:09:35.386 crypto/dpaa2_sec: not in enabled drivers build config 00:09:35.386 crypto/ipsec_mb: not in enabled drivers build config 00:09:35.386 crypto/mlx5: not in enabled drivers build config 00:09:35.386 crypto/mvsam: not in enabled drivers build config 00:09:35.386 crypto/nitrox: not in enabled drivers build config 00:09:35.386 crypto/null: not in enabled drivers build config 00:09:35.386 crypto/octeontx: not in enabled drivers build config 00:09:35.386 crypto/openssl: not in enabled drivers build config 00:09:35.386 crypto/scheduler: not in enabled drivers build config 00:09:35.386 crypto/uadk: not in enabled drivers build config 00:09:35.386 crypto/virtio: not in enabled drivers build config 00:09:35.386 compress/isal: not in enabled drivers build config 00:09:35.386 compress/mlx5: not in enabled drivers build config 00:09:35.386 compress/octeontx: not in enabled drivers build config 00:09:35.386 compress/zlib: not in enabled drivers build config 00:09:35.386 regex/mlx5: not in enabled drivers build config 00:09:35.386 regex/cn9k: not in enabled drivers build config 00:09:35.386 ml/cnxk: not in enabled drivers build config 00:09:35.386 vdpa/ifc: not in enabled drivers build config 00:09:35.386 vdpa/mlx5: not in enabled drivers build config 00:09:35.386 vdpa/nfp: not in enabled drivers build config 00:09:35.386 vdpa/sfc: not in enabled drivers build config 00:09:35.386 event/cnxk: not in enabled drivers build config 00:09:35.386 event/dlb2: not in enabled drivers build config 00:09:35.386 event/dpaa: not in enabled drivers build config 00:09:35.386 event/dpaa2: not in enabled drivers build config 00:09:35.386 event/dsw: not in enabled drivers build config 00:09:35.386 event/opdl: not in enabled drivers build config 00:09:35.386 event/skeleton: not in enabled drivers build config 00:09:35.386 event/sw: not in enabled drivers build config 00:09:35.386 event/octeontx: not in enabled drivers build config 00:09:35.386 baseband/acc: not in enabled drivers build config 00:09:35.386 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:09:35.386 baseband/fpga_lte_fec: not in enabled drivers build config 00:09:35.386 baseband/la12xx: not in enabled drivers build config 00:09:35.386 baseband/null: not in enabled drivers build config 00:09:35.386 baseband/turbo_sw: not in enabled drivers build config 00:09:35.386 gpu/cuda: not in enabled drivers build config 00:09:35.386 00:09:35.386 00:09:35.386 Build targets in project: 220 00:09:35.386 00:09:35.386 DPDK 23.11.0 00:09:35.386 00:09:35.386 User defined options 00:09:35.386 libdir : lib 00:09:35.386 prefix : /home/vagrant/spdk_repo/dpdk/build 00:09:35.386 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:09:35.386 c_link_args : 00:09:35.386 enable_docs : false 00:09:35.386 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:09:35.386 enable_kmods : false 00:09:35.386 machine : native 00:09:35.386 tests : false 00:09:35.386 00:09:35.386 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:35.386 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:09:35.645 10:57:04 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:09:35.645 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:09:35.645 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:09:35.645 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:35.645 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:35.645 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:35.903 [5/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:35.903 [6/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:35.903 [7/710] Linking static target lib/librte_kvargs.a 00:09:35.903 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:35.903 [9/710] Linking static target lib/librte_log.a 00:09:35.903 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:36.162 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:36.162 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:36.162 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:36.162 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:36.162 [15/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:36.420 [16/710] Linking target lib/librte_log.so.24.0 00:09:36.420 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:36.420 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:36.679 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:36.679 [20/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:09:36.679 [21/710] Linking target lib/librte_kvargs.so.24.0 00:09:36.679 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:36.679 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:36.679 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:36.937 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:09:36.937 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:36.937 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:36.937 [28/710] Linking static target lib/librte_telemetry.a 00:09:36.937 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:36.937 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:36.937 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:37.196 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:37.196 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:37.196 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:09:37.454 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:37.454 [36/710] Linking target lib/librte_telemetry.so.24.0 00:09:37.454 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:37.454 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:37.454 [39/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:09:37.454 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:37.454 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:37.454 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:37.454 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:37.712 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:37.712 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:37.970 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:37.970 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:37.970 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:38.228 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:38.228 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:38.229 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:38.229 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:38.229 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:38.229 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:38.487 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:38.487 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:38.487 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:38.487 [58/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:38.487 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:38.746 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:38.746 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:38.746 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:38.746 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:38.746 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:38.746 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:39.005 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:39.005 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:39.005 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:39.263 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:39.264 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:39.264 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:39.264 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:09:39.264 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:39.264 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:39.264 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:39.264 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:39.264 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:39.522 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:39.779 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:39.779 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:39.779 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:39.779 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:40.038 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:40.038 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:40.039 [85/710] Linking static target lib/librte_ring.a 00:09:40.298 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:40.298 [87/710] Linking static target lib/librte_eal.a 00:09:40.298 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:40.298 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:40.298 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.556 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:40.556 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:40.556 [93/710] Linking static target lib/librte_mempool.a 00:09:40.556 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:40.556 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:40.992 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:40.992 [97/710] Linking static target lib/librte_rcu.a 00:09:40.992 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:09:40.992 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:09:40.992 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.251 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:41.251 [102/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:41.251 [103/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:41.251 [104/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.251 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:41.251 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:41.510 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:41.510 [108/710] Linking static target lib/librte_mbuf.a 00:09:41.510 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:41.510 [110/710] Linking static target lib/librte_net.a 00:09:41.767 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:41.767 [112/710] Linking static target lib/librte_meter.a 00:09:41.767 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.767 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:09:42.025 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:42.025 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:42.025 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:42.025 [118/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:42.025 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:42.603 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:42.604 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:42.862 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:43.120 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:43.120 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:09:43.120 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:43.120 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:09:43.120 [127/710] Linking static target lib/librte_pci.a 00:09:43.120 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:43.379 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:43.379 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:43.379 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:43.379 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:43.379 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:43.379 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:43.637 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:43.637 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:43.637 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:43.637 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:43.637 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:43.637 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:43.637 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:43.895 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:43.895 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:44.153 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:44.153 [145/710] Linking static target lib/librte_cmdline.a 00:09:44.153 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:44.153 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:09:44.153 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:09:44.153 [149/710] Linking static target lib/librte_metrics.a 00:09:44.411 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:44.677 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:09:44.935 [152/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:44.935 [153/710] Linking static target lib/librte_timer.a 00:09:44.935 [154/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to 
capture output) 00:09:44.935 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:45.193 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:45.451 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:09:45.709 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:09:45.709 [159/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:09:45.709 [160/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:09:46.275 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:46.275 [162/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:09:46.275 [163/710] Linking static target lib/librte_ethdev.a 00:09:46.533 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:09:46.533 [165/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:09:46.533 [166/710] Linking static target lib/librte_bitratestats.a 00:09:46.533 [167/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:46.533 [168/710] Linking static target lib/librte_hash.a 00:09:46.791 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:09:46.791 [170/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:09:46.791 [171/710] Linking static target lib/librte_bbdev.a 00:09:46.791 [172/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:46.791 [173/710] Linking target lib/librte_eal.so.24.0 00:09:46.791 [174/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:09:47.049 [175/710] Linking static target lib/acl/libavx2_tmp.a 00:09:47.049 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:09:47.049 [177/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:09:47.049 [178/710] Linking target lib/librte_ring.so.24.0 00:09:47.049 [179/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:09:47.308 [180/710] Linking target lib/librte_rcu.so.24.0 00:09:47.308 [181/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:47.308 [182/710] Linking target lib/librte_mempool.so.24.0 00:09:47.308 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:09:47.308 [184/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:09:47.308 [185/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:09:47.308 [186/710] Linking target lib/librte_meter.so.24.0 00:09:47.308 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:47.308 [188/710] Linking target lib/librte_timer.so.24.0 00:09:47.308 [189/710] Linking target lib/librte_pci.so.24.0 00:09:47.308 [190/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:09:47.308 [191/710] Linking target lib/librte_mbuf.so.24.0 00:09:47.567 [192/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:09:47.567 [193/710] Linking static target lib/acl/libavx512_tmp.a 00:09:47.567 [194/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:09:47.567 [195/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:09:47.567 [196/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:09:47.567 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:09:47.567 [198/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:09:47.567 [199/710] Linking target lib/librte_net.so.24.0 00:09:47.567 [200/710] Linking target lib/librte_bbdev.so.24.0 00:09:47.567 [201/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:09:47.825 [202/710] Linking static target lib/librte_acl.a 00:09:47.825 [203/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:09:47.825 [204/710] Linking target lib/librte_cmdline.so.24.0 00:09:47.825 [205/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:09:47.825 [206/710] Linking static target lib/librte_cfgfile.a 00:09:47.825 [207/710] Linking target lib/librte_hash.so.24.0 00:09:47.825 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:09:48.083 [209/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:09:48.084 [210/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:09:48.084 [211/710] Linking target lib/librte_acl.so.24.0 00:09:48.084 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:09:48.084 [213/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:09:48.341 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:09:48.342 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:09:48.342 [216/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:09:48.342 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:48.599 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:09:48.599 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:48.857 [220/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:48.857 [221/710] Linking static target lib/librte_compressdev.a 00:09:48.857 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:48.857 [223/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:09:48.857 [224/710] Linking static target lib/librte_bpf.a 00:09:49.115 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:49.115 [226/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:09:49.115 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:09:49.115 [228/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.373 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:09:49.373 [230/710] Linking static target lib/librte_distributor.a 00:09:49.373 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.373 [232/710] Linking target lib/librte_compressdev.so.24.0 00:09:49.373 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:49.631 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.631 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:49.631 [236/710] Linking static target lib/librte_dmadev.a 00:09:49.631 [237/710] Linking target lib/librte_distributor.so.24.0 00:09:49.889 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:09:49.890 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.890 [240/710] Linking target lib/librte_dmadev.so.24.0 00:09:50.147 [241/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:09:50.147 [242/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:09:50.406 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:09:50.720 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:09:50.720 [245/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:50.720 [246/710] Linking static target lib/librte_cryptodev.a 00:09:50.978 [247/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:09:50.978 [248/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:09:50.978 [249/710] Linking static target lib/librte_efd.a 00:09:51.236 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:09:51.236 [251/710] Linking target lib/librte_efd.so.24.0 00:09:51.236 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:51.237 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:09:51.237 [254/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:09:51.494 [255/710] Linking static target lib/librte_dispatcher.a 00:09:51.494 [256/710] Linking target lib/librte_ethdev.so.24.0 00:09:51.494 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:09:51.494 [258/710] Linking target lib/librte_metrics.so.24.0 00:09:51.494 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:09:51.494 [260/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:09:51.752 [261/710] Linking static target lib/librte_gpudev.a 00:09:51.752 [262/710] Linking target lib/librte_bpf.so.24.0 00:09:51.752 [263/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:09:51.752 [264/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:09:51.752 [265/710] Linking target lib/librte_bitratestats.so.24.0 00:09:51.752 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:09:51.752 [267/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.010 [268/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.010 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:09:52.010 [270/710] Linking target lib/librte_cryptodev.so.24.0 00:09:52.010 [271/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:09:52.269 [272/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:09:52.269 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:09:52.528 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.528 [275/710] Linking target lib/librte_gpudev.so.24.0 00:09:52.528 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:09:52.528 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:09:52.528 [278/710] Linking static target 
lib/librte_eventdev.a 00:09:52.528 [279/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:09:52.787 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:09:52.787 [281/710] Linking static target lib/librte_gro.a 00:09:52.787 [282/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:09:52.787 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:09:52.787 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:09:52.787 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:09:52.787 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:09:53.045 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:09:53.045 [288/710] Linking target lib/librte_gro.so.24.0 00:09:53.303 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:09:53.303 [290/710] Linking static target lib/librte_gso.a 00:09:53.566 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:09:53.566 [292/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:09:53.566 [293/710] Linking static target lib/librte_jobstats.a 00:09:53.566 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:09:53.566 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:09:53.566 [296/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:09:53.566 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:09:53.566 [298/710] Linking target lib/librte_gso.so.24.0 00:09:53.566 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:09:53.566 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:09:53.834 [301/710] Linking static target lib/librte_ip_frag.a 00:09:53.834 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:09:53.834 [303/710] Linking static target lib/librte_latencystats.a 00:09:53.834 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:09:53.834 [305/710] Linking target lib/librte_jobstats.so.24.0 00:09:54.091 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:09:54.091 [307/710] Linking target lib/librte_ip_frag.so.24.0 00:09:54.091 [308/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:09:54.091 [309/710] Linking target lib/librte_latencystats.so.24.0 00:09:54.091 [310/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:09:54.091 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:09:54.091 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:09:54.349 [313/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:09:54.349 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:54.349 [315/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:09:54.349 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:54.606 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:54.864 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:54.864 [319/710] Linking target 
lib/librte_eventdev.so.24.0 00:09:54.864 [320/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:09:54.864 [321/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:09:54.864 [322/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:09:54.864 [323/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:54.864 [324/710] Linking static target lib/librte_lpm.a 00:09:54.864 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:09:55.122 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:55.122 [327/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:09:55.122 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:09:55.122 [329/710] Linking static target lib/librte_pcapng.a 00:09:55.122 [330/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:55.122 [331/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:55.379 [332/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:09:55.379 [333/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:09:55.379 [334/710] Linking target lib/librte_lpm.so.24.0 00:09:55.379 [335/710] Linking target lib/librte_pcapng.so.24.0 00:09:55.379 [336/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:09:55.379 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:09:55.637 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:55.637 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:55.895 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:55.895 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:09:55.895 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:55.895 [343/710] Linking static target lib/librte_power.a 00:09:55.895 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:09:55.895 [345/710] Linking static target lib/librte_rawdev.a 00:09:55.895 [346/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:09:55.895 [347/710] Linking static target lib/librte_member.a 00:09:56.153 [348/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:09:56.153 [349/710] Linking static target lib/librte_regexdev.a 00:09:56.153 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:09:56.153 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:09:56.412 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:09:56.412 [353/710] Linking target lib/librte_member.so.24.0 00:09:56.412 [354/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:56.412 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:09:56.412 [356/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:09:56.412 [357/710] Linking static target lib/librte_mldev.a 00:09:56.412 [358/710] Linking target lib/librte_rawdev.so.24.0 00:09:56.412 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:56.412 [360/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 
00:09:56.672 [361/710] Linking target lib/librte_power.so.24.0 00:09:56.672 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:09:56.932 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:56.932 [364/710] Linking target lib/librte_regexdev.so.24.0 00:09:56.932 [365/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:56.932 [366/710] Linking static target lib/librte_reorder.a 00:09:56.932 [367/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:09:56.932 [368/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:09:56.932 [369/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:09:56.932 [370/710] Linking static target lib/librte_rib.a 00:09:57.190 [371/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:57.190 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:09:57.190 [373/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:57.190 [374/710] Linking target lib/librte_reorder.so.24.0 00:09:57.190 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:09:57.448 [376/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:09:57.448 [377/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:09:57.448 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:57.448 [379/710] Linking static target lib/librte_stack.a 00:09:57.448 [380/710] Linking static target lib/librte_security.a 00:09:57.448 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:09:57.448 [382/710] Linking target lib/librte_rib.so.24.0 00:09:57.706 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:09:57.706 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:57.706 [385/710] Linking target lib/librte_stack.so.24.0 00:09:57.706 [386/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:09:57.706 [387/710] Linking target lib/librte_mldev.so.24.0 00:09:57.706 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:57.965 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:57.965 [390/710] Linking target lib/librte_security.so.24.0 00:09:57.965 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:57.965 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:09:58.224 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:58.224 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:09:58.224 [395/710] Linking static target lib/librte_sched.a 00:09:58.482 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:58.741 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:09:58.741 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:58.741 [399/710] Linking target lib/librte_sched.so.24.0 00:09:58.741 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:09:58.999 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:58.999 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:09:59.258 [403/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:09:59.258 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:59.516 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:09:59.516 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:09:59.799 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:09:59.799 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:09:59.799 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:09:59.799 [410/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:10:00.076 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:10:00.076 [412/710] Linking static target lib/librte_ipsec.a 00:10:00.076 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:10:00.335 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:10:00.335 [415/710] Linking target lib/librte_ipsec.so.24.0 00:10:00.335 [416/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:10:00.335 [417/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:10:00.335 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:10:00.335 [419/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:10:00.593 [420/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:10:00.593 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:10:00.593 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:10:00.593 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:10:01.527 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:10:01.527 [425/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:10:01.527 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:10:01.527 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:10:01.527 [428/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:10:01.527 [429/710] Linking static target lib/librte_pdcp.a 00:10:01.527 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:10:01.527 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:10:01.527 [432/710] Linking static target lib/librte_fib.a 00:10:01.785 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:10:01.785 [434/710] Linking target lib/librte_pdcp.so.24.0 00:10:02.044 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:10:02.044 [436/710] Linking target lib/librte_fib.so.24.0 00:10:02.044 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:10:02.610 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:10:02.610 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:10:02.610 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:10:02.610 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:10:02.611 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:10:02.869 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:10:02.869 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:10:03.127 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:10:03.127 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:10:03.127 [447/710] Linking static target lib/librte_port.a 00:10:03.385 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:10:03.385 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:10:03.385 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:10:03.645 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:10:03.645 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:10:03.645 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:10:03.645 [454/710] Linking target lib/librte_port.so.24.0 00:10:03.902 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:10:03.902 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:10:03.902 [457/710] Linking static target lib/librte_pdump.a 00:10:03.902 [458/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:10:03.902 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:10:03.902 [460/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:10:04.160 [461/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:10:04.160 [462/710] Linking target lib/librte_pdump.so.24.0 00:10:04.728 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:10:04.728 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:10:04.728 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:10:04.728 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:10:04.728 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:10:04.728 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:10:04.986 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:10:05.244 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:10:05.244 [471/710] Linking static target lib/librte_table.a 00:10:05.244 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:10:05.244 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:10:05.821 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:10:06.079 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.079 [476/710] Linking target lib/librte_table.so.24.0 00:10:06.079 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:10:06.079 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:10:06.079 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:10:06.079 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:10:06.336 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:10:06.594 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:10:06.852 [483/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:10:06.852 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:10:06.852 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:10:06.852 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_graph_stats.c.o 00:10:07.418 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:10:07.418 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:10:07.418 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:10:07.418 [490/710] Linking static target lib/librte_graph.a 00:10:07.676 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:10:07.676 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:10:07.676 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:10:07.937 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:10:08.195 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:10:08.195 [496/710] Linking target lib/librte_graph.so.24.0 00:10:08.195 [497/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:10:08.195 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:10:08.195 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:10:08.762 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:10:08.762 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:10:08.762 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:10:08.762 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:10:08.762 [504/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:10:09.021 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:10:09.021 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:10:09.280 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:10:09.281 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:10:09.539 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:10:09.539 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:10:09.539 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:10:09.797 [512/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:10:09.797 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:10:09.797 [514/710] Linking static target lib/librte_node.a 00:10:09.797 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:10:10.055 [516/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:10:10.055 [517/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:10:10.055 [518/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.055 [519/710] Linking target lib/librte_node.so.24.0 00:10:10.055 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:10:10.055 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:10:10.312 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:10:10.312 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:10.312 [524/710] Linking static target drivers/librte_bus_vdev.a 00:10:10.312 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:10:10.312 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:10.312 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:10:10.570 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.570 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:10.570 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:10.570 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:10:10.570 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:10:10.900 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:10:10.900 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:10:10.900 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:10:10.900 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:10:10.900 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:10:10.900 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.900 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:10:11.181 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:10:11.181 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:11.181 [542/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:10:11.181 [543/710] Linking static target drivers/librte_mempool_ring.a 00:10:11.181 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:11.181 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:10:11.439 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:10:11.697 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:10:11.955 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:10:12.214 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:10:12.214 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:10:12.214 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:10:13.151 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:10:13.151 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:10:13.151 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:10:13.151 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:10:13.410 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:10:13.410 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:10:13.977 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:10:13.977 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:10:13.977 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:10:13.977 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:10:14.236 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:10:14.495 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:10:14.751 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:10:14.751 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:10:15.009 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:10:15.269 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:10:15.528 [568/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:10:15.528 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:10:15.528 [570/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:10:15.528 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:10:15.788 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:10:15.788 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:10:16.047 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:10:16.047 [575/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:10:16.047 [576/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:10:16.305 [577/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:10:16.305 [578/710] Linking static target lib/librte_vhost.a 00:10:16.305 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:10:16.305 [580/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:10:16.305 [581/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:10:16.305 [582/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:10:16.565 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:10:16.565 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:10:16.825 [585/710] Linking static target drivers/librte_net_i40e.a 00:10:16.825 [586/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:10:16.825 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:10:16.825 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:10:16.825 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:10:17.083 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:10:17.083 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:10:17.340 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:10:17.597 [593/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:10:17.598 [594/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:10:17.855 [595/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:10:17.855 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:10:17.855 [597/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:10:18.113 [598/710] Linking target lib/librte_vhost.so.24.0 00:10:18.113 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:10:18.678 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:10:18.935 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:10:18.935 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:10:18.935 [603/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 
00:10:18.935 [604/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:10:19.193 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:10:19.193 [606/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:10:19.193 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:10:19.772 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:10:20.029 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:10:20.286 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:10:20.286 [611/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:10:20.286 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:10:20.544 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:10:20.544 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:10:20.544 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:10:20.544 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:10:20.544 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:10:21.109 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:10:21.367 [619/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:10:21.367 [620/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:10:21.625 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:10:21.625 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:10:21.884 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:10:22.450 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:10:22.708 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:10:22.708 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:10:22.966 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:10:22.966 [628/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:10:22.966 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:10:22.966 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:10:22.966 [631/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:10:22.967 [632/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:10:23.225 [633/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:10:23.225 [634/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:10:23.225 [635/710] Linking static target lib/librte_pipeline.a 00:10:23.483 [636/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:10:23.483 [637/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:10:23.748 [638/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:10:23.748 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 
00:10:23.748 [640/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:10:24.006 [641/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:10:24.006 [642/710] Linking target app/dpdk-dumpcap 00:10:24.006 [643/710] Linking target app/dpdk-graph 00:10:24.264 [644/710] Linking target app/dpdk-pdump 00:10:24.264 [645/710] Linking target app/dpdk-proc-info 00:10:24.264 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:10:24.264 [647/710] Linking target app/dpdk-test-acl 00:10:24.522 [648/710] Linking target app/dpdk-test-cmdline 00:10:24.522 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:10:24.522 [650/710] Linking target app/dpdk-test-compress-perf 00:10:24.522 [651/710] Linking target app/dpdk-test-crypto-perf 00:10:24.780 [652/710] Linking target app/dpdk-test-dma-perf 00:10:24.780 [653/710] Linking target app/dpdk-test-fib 00:10:24.780 [654/710] Linking target app/dpdk-test-flow-perf 00:10:25.038 [655/710] Linking target app/dpdk-test-gpudev 00:10:25.038 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:10:25.038 [657/710] Linking target app/dpdk-test-eventdev 00:10:25.296 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:10:25.296 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:10:25.296 [660/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:10:25.296 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:10:25.554 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:10:25.554 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:10:25.812 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:10:25.812 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:10:25.812 [666/710] Linking target app/dpdk-test-bbdev 00:10:26.069 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:10:26.069 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:10:26.069 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:10:26.326 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:10:26.326 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:10:26.326 [672/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:10:26.583 [673/710] Linking target lib/librte_pipeline.so.24.0 00:10:26.841 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:10:26.841 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:10:27.098 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:10:27.098 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:10:27.098 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:10:27.363 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:10:27.363 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:10:27.632 [681/710] Linking target app/dpdk-test-pipeline 00:10:27.632 [682/710] Linking target app/dpdk-test-mldev 00:10:27.632 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:10:28.197 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:10:28.197 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:10:28.197 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:10:28.197 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:10:28.197 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:10:28.763 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:10:28.763 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:10:29.021 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:10:29.021 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:10:29.021 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:10:29.279 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:10:29.846 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:10:29.846 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:10:30.104 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:10:30.104 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:10:30.104 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:10:30.105 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:10:30.363 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:10:30.363 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:10:30.363 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:10:30.621 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:10:30.621 [705/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:10:30.621 [706/710] Linking target app/dpdk-test-regex 00:10:30.621 [707/710] Linking target app/dpdk-test-sad 00:10:31.188 [708/710] Linking target app/dpdk-testpmd 00:10:31.188 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:10:31.754 [710/710] Linking target app/dpdk-test-security-perf 00:10:31.754 10:58:00 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:10:31.754 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:10:31.754 [0/1] Installing files. 
00:10:32.014 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:10:32.014 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:10:32.016 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.016 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.017 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.277 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:10:32.278 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:10:32.278 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:10:32.279 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:10:32.279 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:10:32.279 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:10:32.279 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.279 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.847 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.847 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.847 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.847 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:10:32.847 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.847 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:10:32.847 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.847 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:10:32.847 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:10:32.847 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:10:32.847 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.847 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.848 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.849 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:10:32.850 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:10:32.850 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:10:32.850 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:10:32.850 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:10:32.850 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:10:32.850 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:10:32.850 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:10:32.850 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:10:32.850 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:10:32.850 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:10:32.850 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:10:32.850 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:10:32.850 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:10:32.850 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:10:32.850 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:10:32.850 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:10:32.850 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:10:32.850 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:10:32.850 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:10:32.850 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:10:32.850 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:10:32.850 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:10:32.850 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:10:32.850 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:10:32.850 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:10:32.850 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:10:32.850 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:10:32.850 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:10:32.850 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:10:32.850 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:10:32.850 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:10:32.850 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:10:32.850 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:10:32.850 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:10:32.850 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:10:32.850 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:10:32.850 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:10:32.850 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:10:32.850 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:10:32.850 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:10:32.850 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:10:32.850 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:10:32.850 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:10:32.850 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:10:32.850 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:10:32.850 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:10:32.850 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:10:32.850 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:10:32.850 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:10:32.850 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:10:32.850 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:10:32.850 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:10:32.850 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:10:32.850 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:10:32.850 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:10:32.850 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:10:32.850 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:10:32.850 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:10:32.850 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:10:32.850 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:10:32.850 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:10:32.850 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:10:32.850 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:10:32.850 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:10:32.850 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:10:32.850 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:10:32.850 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:10:32.850 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:10:32.850 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:10:32.850 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:10:32.850 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:10:32.850 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:10:32.850 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:10:32.850 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:10:32.850 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:10:32.850 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:10:32.850 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:10:32.850 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:10:32.850 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:10:32.850 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:10:32.850 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:10:32.850 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:10:32.850 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:10:32.850 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:10:32.850 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:10:32.850 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:10:32.850 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:10:32.850 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:10:32.850 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:10:32.850 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:10:32.850 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:10:32.850 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:10:32.850 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:10:32.851 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:10:32.851 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:10:32.851 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:10:32.851 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:10:32.851 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:10:32.851 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:10:32.851 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:10:32.851 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:10:32.851 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:10:32.851 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:10:32.851 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:10:32.851 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:10:32.851 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:10:32.851 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:10:32.851 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:10:32.851 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:10:32.851 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:10:32.851 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:10:32.851 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:10:32.851 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:10:32.851 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:10:32.851 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:10:32.851 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:10:32.851 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:10:32.851 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:10:32.851 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:10:32.851 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:10:32.851 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:10:32.851 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:10:32.851 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:10:32.851 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:10:32.851 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:10:32.851 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:10:32.851 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:10:32.851 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:10:32.851 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:10:32.851 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:10:32.851 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:10:32.851 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:10:32.851 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:10:32.851 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:10:32.851 10:58:01 -- common/autobuild_common.sh@189 -- $ uname -s 00:10:32.851 10:58:01 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:10:32.851 10:58:01 -- common/autobuild_common.sh@200 -- $ cat 00:10:32.851 10:58:01 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:10:32.851 00:10:32.851 real 1m4.175s 00:10:32.851 user 7m51.559s 00:10:32.851 sys 1m12.729s 00:10:32.851 10:58:01 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:32.851 10:58:01 -- common/autotest_common.sh@10 -- $ set +x 00:10:32.851 ************************************ 00:10:32.851 END TEST build_native_dpdk 00:10:32.851 ************************************ 00:10:32.851 10:58:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:10:32.851 10:58:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:10:32.851 10:58:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:10:32.851 10:58:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:10:32.851 10:58:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:10:32.851 10:58:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:10:32.851 10:58:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:10:32.851 10:58:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:10:33.109 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:10:33.109 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:10:33.109 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:10:33.109 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:33.673 Using 'verbs' RDMA provider 00:10:46.814 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:11:01.684 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:11:01.684 go version go1.21.1 linux/amd64 00:11:01.685 Creating mk/config.mk...done. 00:11:01.685 Creating mk/cc.flags.mk...done. 00:11:01.685 Type 'make' to build. 00:11:01.685 10:58:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:11:01.685 10:58:28 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:11:01.685 10:58:28 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:11:01.685 10:58:28 -- common/autotest_common.sh@10 -- $ set +x 00:11:01.685 ************************************ 00:11:01.685 START TEST make 00:11:01.685 ************************************ 00:11:01.685 10:58:28 -- common/autotest_common.sh@1111 -- $ make -j10 00:11:01.685 make[1]: Nothing to be done for 'all'. 
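(For reference, the entries above show DPDK being installed into its local build/ prefix and SPDK then being configured against that prefix before the parallel build starts. The sketch below restates those steps as a standalone script; the flag set and paths are copied from the configure line in the log above, the variable names are illustrative placeholders, and the exact environment is specific to this CI VM.)

#!/usr/bin/env bash
# Sketch: configure SPDK against the locally built DPDK shown above, then build.
set -euo pipefail

SPDK_REPO=/home/vagrant/spdk_repo/spdk          # assumed checkout location (from the log)
DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build   # DPDK install prefix produced by the steps above

cd "$SPDK_REPO"
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk \
    --with-dpdk="$DPDK_BUILD" --with-avahi --with-golang --with-shared

# Parallel build, mirroring the "run_test make make -j10" step in the log.
make -j10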
00:11:28.225 CC lib/ut/ut.o 00:11:28.225 CC lib/log/log.o 00:11:28.225 CC lib/log/log_flags.o 00:11:28.225 CC lib/log/log_deprecated.o 00:11:28.225 CC lib/ut_mock/mock.o 00:11:28.225 LIB libspdk_ut_mock.a 00:11:28.225 LIB libspdk_ut.a 00:11:28.225 SO libspdk_ut_mock.so.6.0 00:11:28.225 LIB libspdk_log.a 00:11:28.225 SO libspdk_ut.so.2.0 00:11:28.225 SYMLINK libspdk_ut_mock.so 00:11:28.225 SO libspdk_log.so.7.0 00:11:28.225 SYMLINK libspdk_ut.so 00:11:28.225 SYMLINK libspdk_log.so 00:11:28.225 CXX lib/trace_parser/trace.o 00:11:28.225 CC lib/ioat/ioat.o 00:11:28.225 CC lib/util/base64.o 00:11:28.225 CC lib/dma/dma.o 00:11:28.225 CC lib/util/cpuset.o 00:11:28.225 CC lib/util/bit_array.o 00:11:28.225 CC lib/util/crc32.o 00:11:28.225 CC lib/util/crc16.o 00:11:28.225 CC lib/util/crc32c.o 00:11:28.225 CC lib/vfio_user/host/vfio_user_pci.o 00:11:28.225 CC lib/util/crc32_ieee.o 00:11:28.225 CC lib/vfio_user/host/vfio_user.o 00:11:28.225 CC lib/util/crc64.o 00:11:28.225 CC lib/util/dif.o 00:11:28.225 CC lib/util/fd.o 00:11:28.225 LIB libspdk_dma.a 00:11:28.225 SO libspdk_dma.so.4.0 00:11:28.225 CC lib/util/file.o 00:11:28.225 LIB libspdk_ioat.a 00:11:28.225 SO libspdk_ioat.so.7.0 00:11:28.225 CC lib/util/hexlify.o 00:11:28.225 SYMLINK libspdk_dma.so 00:11:28.225 CC lib/util/iov.o 00:11:28.225 SYMLINK libspdk_ioat.so 00:11:28.225 CC lib/util/math.o 00:11:28.225 CC lib/util/pipe.o 00:11:28.225 CC lib/util/strerror_tls.o 00:11:28.225 CC lib/util/string.o 00:11:28.225 CC lib/util/uuid.o 00:11:28.225 LIB libspdk_vfio_user.a 00:11:28.225 CC lib/util/fd_group.o 00:11:28.225 SO libspdk_vfio_user.so.5.0 00:11:28.225 CC lib/util/xor.o 00:11:28.225 CC lib/util/zipf.o 00:11:28.225 SYMLINK libspdk_vfio_user.so 00:11:28.225 LIB libspdk_util.a 00:11:28.225 SO libspdk_util.so.9.0 00:11:28.225 LIB libspdk_trace_parser.a 00:11:28.225 SO libspdk_trace_parser.so.5.0 00:11:28.225 SYMLINK libspdk_util.so 00:11:28.225 SYMLINK libspdk_trace_parser.so 00:11:28.225 CC lib/conf/conf.o 00:11:28.225 CC lib/rdma/common.o 00:11:28.225 CC lib/json/json_parse.o 00:11:28.225 CC lib/idxd/idxd.o 00:11:28.225 CC lib/rdma/rdma_verbs.o 00:11:28.225 CC lib/json/json_util.o 00:11:28.225 CC lib/json/json_write.o 00:11:28.225 CC lib/idxd/idxd_user.o 00:11:28.225 CC lib/vmd/vmd.o 00:11:28.225 CC lib/env_dpdk/env.o 00:11:28.225 CC lib/env_dpdk/memory.o 00:11:28.225 LIB libspdk_conf.a 00:11:28.225 CC lib/env_dpdk/pci.o 00:11:28.225 CC lib/env_dpdk/init.o 00:11:28.225 CC lib/env_dpdk/threads.o 00:11:28.225 SO libspdk_conf.so.6.0 00:11:28.225 LIB libspdk_rdma.a 00:11:28.225 LIB libspdk_json.a 00:11:28.225 SYMLINK libspdk_conf.so 00:11:28.225 CC lib/vmd/led.o 00:11:28.225 SO libspdk_rdma.so.6.0 00:11:28.225 SO libspdk_json.so.6.0 00:11:28.225 SYMLINK libspdk_rdma.so 00:11:28.225 CC lib/env_dpdk/pci_ioat.o 00:11:28.225 SYMLINK libspdk_json.so 00:11:28.225 CC lib/env_dpdk/pci_virtio.o 00:11:28.225 CC lib/env_dpdk/pci_vmd.o 00:11:28.225 CC lib/env_dpdk/pci_idxd.o 00:11:28.225 LIB libspdk_idxd.a 00:11:28.225 CC lib/env_dpdk/pci_event.o 00:11:28.225 SO libspdk_idxd.so.12.0 00:11:28.225 CC lib/env_dpdk/sigbus_handler.o 00:11:28.225 CC lib/env_dpdk/pci_dpdk.o 00:11:28.225 SYMLINK libspdk_idxd.so 00:11:28.225 CC lib/env_dpdk/pci_dpdk_2207.o 00:11:28.225 CC lib/env_dpdk/pci_dpdk_2211.o 00:11:28.225 LIB libspdk_vmd.a 00:11:28.225 SO libspdk_vmd.so.6.0 00:11:28.225 SYMLINK libspdk_vmd.so 00:11:28.225 CC lib/jsonrpc/jsonrpc_server.o 00:11:28.225 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:11:28.225 CC lib/jsonrpc/jsonrpc_client.o 00:11:28.225 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:11:28.225 LIB libspdk_jsonrpc.a 00:11:28.225 SO libspdk_jsonrpc.so.6.0 00:11:28.225 SYMLINK libspdk_jsonrpc.so 00:11:28.483 LIB libspdk_env_dpdk.a 00:11:28.483 CC lib/rpc/rpc.o 00:11:28.483 SO libspdk_env_dpdk.so.14.0 00:11:28.741 SYMLINK libspdk_env_dpdk.so 00:11:28.741 LIB libspdk_rpc.a 00:11:28.998 SO libspdk_rpc.so.6.0 00:11:28.998 SYMLINK libspdk_rpc.so 00:11:29.256 CC lib/notify/notify_rpc.o 00:11:29.256 CC lib/notify/notify.o 00:11:29.256 CC lib/keyring/keyring.o 00:11:29.256 CC lib/trace/trace.o 00:11:29.256 CC lib/keyring/keyring_rpc.o 00:11:29.256 CC lib/trace/trace_flags.o 00:11:29.256 CC lib/trace/trace_rpc.o 00:11:29.256 LIB libspdk_notify.a 00:11:29.515 SO libspdk_notify.so.6.0 00:11:29.515 LIB libspdk_trace.a 00:11:29.515 LIB libspdk_keyring.a 00:11:29.515 SO libspdk_keyring.so.1.0 00:11:29.515 SO libspdk_trace.so.10.0 00:11:29.515 SYMLINK libspdk_notify.so 00:11:29.515 SYMLINK libspdk_trace.so 00:11:29.515 SYMLINK libspdk_keyring.so 00:11:29.774 CC lib/sock/sock_rpc.o 00:11:29.774 CC lib/sock/sock.o 00:11:29.774 CC lib/thread/thread.o 00:11:29.774 CC lib/thread/iobuf.o 00:11:30.379 LIB libspdk_sock.a 00:11:30.379 SO libspdk_sock.so.9.0 00:11:30.379 SYMLINK libspdk_sock.so 00:11:30.636 CC lib/nvme/nvme_ctrlr_cmd.o 00:11:30.636 CC lib/nvme/nvme_ctrlr.o 00:11:30.636 CC lib/nvme/nvme_ns_cmd.o 00:11:30.636 CC lib/nvme/nvme_fabric.o 00:11:30.636 CC lib/nvme/nvme_ns.o 00:11:30.636 CC lib/nvme/nvme_pcie.o 00:11:30.636 CC lib/nvme/nvme_pcie_common.o 00:11:30.636 CC lib/nvme/nvme.o 00:11:30.636 CC lib/nvme/nvme_qpair.o 00:11:31.203 LIB libspdk_thread.a 00:11:31.461 SO libspdk_thread.so.10.0 00:11:31.461 CC lib/nvme/nvme_quirks.o 00:11:31.461 CC lib/nvme/nvme_transport.o 00:11:31.461 SYMLINK libspdk_thread.so 00:11:31.461 CC lib/nvme/nvme_discovery.o 00:11:31.461 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:11:31.461 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:11:31.461 CC lib/nvme/nvme_tcp.o 00:11:31.461 CC lib/nvme/nvme_opal.o 00:11:31.718 CC lib/nvme/nvme_io_msg.o 00:11:31.718 CC lib/nvme/nvme_poll_group.o 00:11:31.977 CC lib/nvme/nvme_zns.o 00:11:31.977 CC lib/nvme/nvme_stubs.o 00:11:31.977 CC lib/nvme/nvme_auth.o 00:11:32.235 CC lib/nvme/nvme_cuse.o 00:11:32.235 CC lib/nvme/nvme_rdma.o 00:11:32.235 CC lib/accel/accel.o 00:11:32.235 CC lib/accel/accel_rpc.o 00:11:32.493 CC lib/accel/accel_sw.o 00:11:32.751 CC lib/blob/blobstore.o 00:11:32.751 CC lib/blob/request.o 00:11:32.751 CC lib/blob/zeroes.o 00:11:32.751 CC lib/init/json_config.o 00:11:32.751 CC lib/virtio/virtio.o 00:11:33.008 CC lib/init/subsystem.o 00:11:33.008 CC lib/blob/blob_bs_dev.o 00:11:33.008 CC lib/virtio/virtio_vhost_user.o 00:11:33.008 CC lib/virtio/virtio_vfio_user.o 00:11:33.009 CC lib/init/subsystem_rpc.o 00:11:33.266 CC lib/virtio/virtio_pci.o 00:11:33.266 CC lib/init/rpc.o 00:11:33.266 LIB libspdk_accel.a 00:11:33.266 SO libspdk_accel.so.15.0 00:11:33.266 LIB libspdk_init.a 00:11:33.524 SO libspdk_init.so.5.0 00:11:33.524 SYMLINK libspdk_accel.so 00:11:33.524 LIB libspdk_virtio.a 00:11:33.524 SYMLINK libspdk_init.so 00:11:33.524 SO libspdk_virtio.so.7.0 00:11:33.524 LIB libspdk_nvme.a 00:11:33.524 SYMLINK libspdk_virtio.so 00:11:33.524 CC lib/bdev/bdev.o 00:11:33.524 CC lib/bdev/bdev_rpc.o 00:11:33.524 CC lib/bdev/bdev_zone.o 00:11:33.524 CC lib/bdev/part.o 00:11:33.524 CC lib/bdev/scsi_nvme.o 00:11:33.835 CC lib/event/app.o 00:11:33.835 CC lib/event/reactor.o 00:11:33.835 CC lib/event/log_rpc.o 00:11:33.835 SO libspdk_nvme.so.13.0 00:11:33.835 CC lib/event/app_rpc.o 00:11:33.835 CC 
lib/event/scheduler_static.o 00:11:34.095 SYMLINK libspdk_nvme.so 00:11:34.095 LIB libspdk_event.a 00:11:34.095 SO libspdk_event.so.13.0 00:11:34.354 SYMLINK libspdk_event.so 00:11:35.729 LIB libspdk_blob.a 00:11:35.729 SO libspdk_blob.so.11.0 00:11:35.729 SYMLINK libspdk_blob.so 00:11:35.987 CC lib/lvol/lvol.o 00:11:35.987 CC lib/blobfs/blobfs.o 00:11:35.987 CC lib/blobfs/tree.o 00:11:36.245 LIB libspdk_bdev.a 00:11:36.503 SO libspdk_bdev.so.15.0 00:11:36.503 SYMLINK libspdk_bdev.so 00:11:36.761 CC lib/ublk/ublk.o 00:11:36.761 CC lib/nbd/nbd.o 00:11:36.761 CC lib/ublk/ublk_rpc.o 00:11:36.761 CC lib/nbd/nbd_rpc.o 00:11:36.761 CC lib/scsi/dev.o 00:11:36.761 CC lib/scsi/lun.o 00:11:36.761 CC lib/ftl/ftl_core.o 00:11:36.761 CC lib/nvmf/ctrlr.o 00:11:37.019 LIB libspdk_blobfs.a 00:11:37.019 CC lib/ftl/ftl_init.o 00:11:37.019 CC lib/ftl/ftl_layout.o 00:11:37.019 SO libspdk_blobfs.so.10.0 00:11:37.019 SYMLINK libspdk_blobfs.so 00:11:37.019 CC lib/ftl/ftl_debug.o 00:11:37.019 CC lib/ftl/ftl_io.o 00:11:37.019 LIB libspdk_lvol.a 00:11:37.019 SO libspdk_lvol.so.10.0 00:11:37.280 SYMLINK libspdk_lvol.so 00:11:37.280 CC lib/ftl/ftl_sb.o 00:11:37.280 CC lib/nvmf/ctrlr_discovery.o 00:11:37.280 CC lib/nvmf/ctrlr_bdev.o 00:11:37.280 CC lib/scsi/port.o 00:11:37.280 LIB libspdk_nbd.a 00:11:37.280 CC lib/scsi/scsi.o 00:11:37.280 SO libspdk_nbd.so.7.0 00:11:37.280 CC lib/scsi/scsi_bdev.o 00:11:37.280 CC lib/scsi/scsi_pr.o 00:11:37.280 SYMLINK libspdk_nbd.so 00:11:37.280 CC lib/scsi/scsi_rpc.o 00:11:37.280 CC lib/ftl/ftl_l2p.o 00:11:37.280 CC lib/scsi/task.o 00:11:37.539 LIB libspdk_ublk.a 00:11:37.539 CC lib/nvmf/subsystem.o 00:11:37.539 SO libspdk_ublk.so.3.0 00:11:37.539 CC lib/nvmf/nvmf.o 00:11:37.539 SYMLINK libspdk_ublk.so 00:11:37.539 CC lib/nvmf/nvmf_rpc.o 00:11:37.539 CC lib/ftl/ftl_l2p_flat.o 00:11:37.539 CC lib/ftl/ftl_nv_cache.o 00:11:37.539 CC lib/ftl/ftl_band.o 00:11:37.539 CC lib/ftl/ftl_band_ops.o 00:11:37.799 CC lib/nvmf/transport.o 00:11:37.799 CC lib/nvmf/tcp.o 00:11:38.059 LIB libspdk_scsi.a 00:11:38.059 CC lib/nvmf/rdma.o 00:11:38.059 CC lib/ftl/ftl_writer.o 00:11:38.059 SO libspdk_scsi.so.9.0 00:11:38.318 SYMLINK libspdk_scsi.so 00:11:38.318 CC lib/ftl/ftl_rq.o 00:11:38.318 CC lib/iscsi/conn.o 00:11:38.318 CC lib/iscsi/init_grp.o 00:11:38.576 CC lib/ftl/ftl_reloc.o 00:11:38.576 CC lib/vhost/vhost.o 00:11:38.576 CC lib/vhost/vhost_rpc.o 00:11:38.576 CC lib/iscsi/iscsi.o 00:11:38.576 CC lib/vhost/vhost_scsi.o 00:11:38.576 CC lib/iscsi/md5.o 00:11:38.834 CC lib/ftl/ftl_l2p_cache.o 00:11:38.834 CC lib/vhost/vhost_blk.o 00:11:38.834 CC lib/iscsi/param.o 00:11:39.093 CC lib/iscsi/portal_grp.o 00:11:39.093 CC lib/iscsi/tgt_node.o 00:11:39.351 CC lib/iscsi/iscsi_subsystem.o 00:11:39.351 CC lib/ftl/ftl_p2l.o 00:11:39.351 CC lib/iscsi/iscsi_rpc.o 00:11:39.351 CC lib/vhost/rte_vhost_user.o 00:11:39.609 CC lib/iscsi/task.o 00:11:39.609 CC lib/ftl/mngt/ftl_mngt.o 00:11:39.609 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:11:39.609 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:11:39.609 CC lib/ftl/mngt/ftl_mngt_startup.o 00:11:39.609 CC lib/ftl/mngt/ftl_mngt_md.o 00:11:39.867 CC lib/ftl/mngt/ftl_mngt_misc.o 00:11:39.867 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:11:39.867 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:11:39.867 CC lib/ftl/mngt/ftl_mngt_band.o 00:11:39.867 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:11:39.867 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:11:40.126 LIB libspdk_nvmf.a 00:11:40.126 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:11:40.126 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:11:40.126 LIB libspdk_iscsi.a 00:11:40.126 CC 
lib/ftl/utils/ftl_conf.o 00:11:40.126 CC lib/ftl/utils/ftl_md.o 00:11:40.126 CC lib/ftl/utils/ftl_mempool.o 00:11:40.126 SO libspdk_nvmf.so.18.0 00:11:40.126 SO libspdk_iscsi.so.8.0 00:11:40.126 CC lib/ftl/utils/ftl_bitmap.o 00:11:40.384 CC lib/ftl/utils/ftl_property.o 00:11:40.384 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:11:40.384 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:11:40.384 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:11:40.384 SYMLINK libspdk_iscsi.so 00:11:40.384 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:11:40.384 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:11:40.384 SYMLINK libspdk_nvmf.so 00:11:40.384 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:11:40.384 CC lib/ftl/upgrade/ftl_sb_v3.o 00:11:40.641 LIB libspdk_vhost.a 00:11:40.641 CC lib/ftl/upgrade/ftl_sb_v5.o 00:11:40.641 CC lib/ftl/nvc/ftl_nvc_dev.o 00:11:40.641 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:11:40.641 CC lib/ftl/base/ftl_base_dev.o 00:11:40.641 CC lib/ftl/base/ftl_base_bdev.o 00:11:40.641 CC lib/ftl/ftl_trace.o 00:11:40.641 SO libspdk_vhost.so.8.0 00:11:40.641 SYMLINK libspdk_vhost.so 00:11:40.899 LIB libspdk_ftl.a 00:11:41.158 SO libspdk_ftl.so.9.0 00:11:41.417 SYMLINK libspdk_ftl.so 00:11:41.983 CC module/env_dpdk/env_dpdk_rpc.o 00:11:41.983 CC module/blob/bdev/blob_bdev.o 00:11:41.983 CC module/scheduler/dynamic/scheduler_dynamic.o 00:11:41.983 CC module/accel/iaa/accel_iaa.o 00:11:41.983 CC module/keyring/file/keyring.o 00:11:41.983 CC module/accel/dsa/accel_dsa.o 00:11:41.983 CC module/accel/error/accel_error.o 00:11:41.983 CC module/accel/ioat/accel_ioat.o 00:11:41.983 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:11:41.983 CC module/sock/posix/posix.o 00:11:41.983 LIB libspdk_env_dpdk_rpc.a 00:11:41.983 SO libspdk_env_dpdk_rpc.so.6.0 00:11:41.983 CC module/keyring/file/keyring_rpc.o 00:11:41.983 SYMLINK libspdk_env_dpdk_rpc.so 00:11:42.241 LIB libspdk_scheduler_dynamic.a 00:11:42.241 CC module/accel/iaa/accel_iaa_rpc.o 00:11:42.241 CC module/accel/error/accel_error_rpc.o 00:11:42.241 SO libspdk_scheduler_dynamic.so.4.0 00:11:42.241 LIB libspdk_scheduler_dpdk_governor.a 00:11:42.241 LIB libspdk_blob_bdev.a 00:11:42.241 CC module/accel/dsa/accel_dsa_rpc.o 00:11:42.241 SO libspdk_scheduler_dpdk_governor.so.4.0 00:11:42.241 SO libspdk_blob_bdev.so.11.0 00:11:42.241 LIB libspdk_keyring_file.a 00:11:42.241 CC module/accel/ioat/accel_ioat_rpc.o 00:11:42.241 SYMLINK libspdk_scheduler_dynamic.so 00:11:42.241 SO libspdk_keyring_file.so.1.0 00:11:42.241 SYMLINK libspdk_blob_bdev.so 00:11:42.241 SYMLINK libspdk_scheduler_dpdk_governor.so 00:11:42.241 LIB libspdk_accel_iaa.a 00:11:42.241 CC module/scheduler/gscheduler/gscheduler.o 00:11:42.241 LIB libspdk_accel_error.a 00:11:42.241 SO libspdk_accel_iaa.so.3.0 00:11:42.241 SYMLINK libspdk_keyring_file.so 00:11:42.241 SO libspdk_accel_error.so.2.0 00:11:42.241 LIB libspdk_accel_dsa.a 00:11:42.241 LIB libspdk_accel_ioat.a 00:11:42.499 SYMLINK libspdk_accel_iaa.so 00:11:42.499 SYMLINK libspdk_accel_error.so 00:11:42.499 SO libspdk_accel_dsa.so.5.0 00:11:42.499 SO libspdk_accel_ioat.so.6.0 00:11:42.499 LIB libspdk_scheduler_gscheduler.a 00:11:42.499 SYMLINK libspdk_accel_dsa.so 00:11:42.499 SO libspdk_scheduler_gscheduler.so.4.0 00:11:42.499 SYMLINK libspdk_accel_ioat.so 00:11:42.499 SYMLINK libspdk_scheduler_gscheduler.so 00:11:42.499 CC module/bdev/error/vbdev_error.o 00:11:42.499 CC module/blobfs/bdev/blobfs_bdev.o 00:11:42.499 CC module/bdev/gpt/gpt.o 00:11:42.499 CC module/bdev/delay/vbdev_delay.o 00:11:42.499 CC module/bdev/lvol/vbdev_lvol.o 00:11:42.499 CC 
module/bdev/malloc/bdev_malloc.o 00:11:42.756 CC module/bdev/null/bdev_null.o 00:11:42.756 LIB libspdk_sock_posix.a 00:11:42.756 CC module/bdev/nvme/bdev_nvme.o 00:11:42.756 CC module/bdev/passthru/vbdev_passthru.o 00:11:42.756 SO libspdk_sock_posix.so.6.0 00:11:42.756 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:11:42.756 CC module/bdev/gpt/vbdev_gpt.o 00:11:42.756 SYMLINK libspdk_sock_posix.so 00:11:42.756 CC module/bdev/nvme/bdev_nvme_rpc.o 00:11:42.756 CC module/bdev/error/vbdev_error_rpc.o 00:11:43.014 LIB libspdk_blobfs_bdev.a 00:11:43.014 CC module/bdev/delay/vbdev_delay_rpc.o 00:11:43.014 SO libspdk_blobfs_bdev.so.6.0 00:11:43.014 CC module/bdev/null/bdev_null_rpc.o 00:11:43.014 CC module/bdev/malloc/bdev_malloc_rpc.o 00:11:43.014 LIB libspdk_bdev_error.a 00:11:43.014 SYMLINK libspdk_blobfs_bdev.so 00:11:43.014 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:11:43.014 CC module/bdev/nvme/nvme_rpc.o 00:11:43.014 LIB libspdk_bdev_gpt.a 00:11:43.014 SO libspdk_bdev_error.so.6.0 00:11:43.014 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:11:43.014 SO libspdk_bdev_gpt.so.6.0 00:11:43.014 SYMLINK libspdk_bdev_error.so 00:11:43.014 LIB libspdk_bdev_delay.a 00:11:43.014 SYMLINK libspdk_bdev_gpt.so 00:11:43.014 SO libspdk_bdev_delay.so.6.0 00:11:43.272 LIB libspdk_bdev_malloc.a 00:11:43.272 LIB libspdk_bdev_null.a 00:11:43.272 SO libspdk_bdev_malloc.so.6.0 00:11:43.272 LIB libspdk_bdev_passthru.a 00:11:43.272 SYMLINK libspdk_bdev_delay.so 00:11:43.272 SO libspdk_bdev_null.so.6.0 00:11:43.272 SO libspdk_bdev_passthru.so.6.0 00:11:43.272 CC module/bdev/nvme/bdev_mdns_client.o 00:11:43.272 CC module/bdev/raid/bdev_raid.o 00:11:43.272 CC module/bdev/nvme/vbdev_opal.o 00:11:43.272 SYMLINK libspdk_bdev_malloc.so 00:11:43.272 SYMLINK libspdk_bdev_null.so 00:11:43.272 CC module/bdev/nvme/vbdev_opal_rpc.o 00:11:43.272 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:11:43.272 SYMLINK libspdk_bdev_passthru.so 00:11:43.272 CC module/bdev/split/vbdev_split.o 00:11:43.272 LIB libspdk_bdev_lvol.a 00:11:43.530 CC module/bdev/split/vbdev_split_rpc.o 00:11:43.530 SO libspdk_bdev_lvol.so.6.0 00:11:43.530 CC module/bdev/zone_block/vbdev_zone_block.o 00:11:43.530 SYMLINK libspdk_bdev_lvol.so 00:11:43.530 CC module/bdev/raid/bdev_raid_rpc.o 00:11:43.530 CC module/bdev/raid/bdev_raid_sb.o 00:11:43.530 LIB libspdk_bdev_split.a 00:11:43.530 CC module/bdev/raid/raid0.o 00:11:43.530 SO libspdk_bdev_split.so.6.0 00:11:43.788 CC module/bdev/aio/bdev_aio.o 00:11:43.788 CC module/bdev/ftl/bdev_ftl.o 00:11:43.788 CC module/bdev/iscsi/bdev_iscsi.o 00:11:43.788 SYMLINK libspdk_bdev_split.so 00:11:43.788 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:11:43.788 CC module/bdev/raid/raid1.o 00:11:43.788 CC module/bdev/raid/concat.o 00:11:43.788 CC module/bdev/aio/bdev_aio_rpc.o 00:11:43.788 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:11:44.046 CC module/bdev/ftl/bdev_ftl_rpc.o 00:11:44.046 LIB libspdk_bdev_iscsi.a 00:11:44.046 LIB libspdk_bdev_aio.a 00:11:44.046 LIB libspdk_bdev_zone_block.a 00:11:44.046 SO libspdk_bdev_aio.so.6.0 00:11:44.046 SO libspdk_bdev_iscsi.so.6.0 00:11:44.304 SO libspdk_bdev_zone_block.so.6.0 00:11:44.304 SYMLINK libspdk_bdev_aio.so 00:11:44.304 SYMLINK libspdk_bdev_iscsi.so 00:11:44.304 LIB libspdk_bdev_raid.a 00:11:44.304 SYMLINK libspdk_bdev_zone_block.so 00:11:44.304 CC module/bdev/virtio/bdev_virtio_scsi.o 00:11:44.304 CC module/bdev/virtio/bdev_virtio_rpc.o 00:11:44.304 CC module/bdev/virtio/bdev_virtio_blk.o 00:11:44.304 SO libspdk_bdev_raid.so.6.0 00:11:44.304 LIB libspdk_bdev_ftl.a 00:11:44.304 SO 
libspdk_bdev_ftl.so.6.0 00:11:44.304 SYMLINK libspdk_bdev_raid.so 00:11:44.304 SYMLINK libspdk_bdev_ftl.so 00:11:44.871 LIB libspdk_bdev_virtio.a 00:11:44.871 SO libspdk_bdev_virtio.so.6.0 00:11:44.871 SYMLINK libspdk_bdev_virtio.so 00:11:45.129 LIB libspdk_bdev_nvme.a 00:11:45.129 SO libspdk_bdev_nvme.so.7.0 00:11:45.129 SYMLINK libspdk_bdev_nvme.so 00:11:45.695 CC module/event/subsystems/sock/sock.o 00:11:45.695 CC module/event/subsystems/scheduler/scheduler.o 00:11:45.695 CC module/event/subsystems/vmd/vmd.o 00:11:45.695 CC module/event/subsystems/keyring/keyring.o 00:11:45.695 CC module/event/subsystems/vmd/vmd_rpc.o 00:11:45.695 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:11:45.695 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:11:45.695 CC module/event/subsystems/iobuf/iobuf.o 00:11:45.695 LIB libspdk_event_keyring.a 00:11:45.695 LIB libspdk_event_vmd.a 00:11:45.695 LIB libspdk_event_vhost_blk.a 00:11:45.953 LIB libspdk_event_scheduler.a 00:11:45.953 LIB libspdk_event_sock.a 00:11:45.953 SO libspdk_event_keyring.so.1.0 00:11:45.953 SO libspdk_event_vmd.so.6.0 00:11:45.953 SO libspdk_event_vhost_blk.so.3.0 00:11:45.953 SO libspdk_event_sock.so.5.0 00:11:45.953 SO libspdk_event_scheduler.so.4.0 00:11:45.953 LIB libspdk_event_iobuf.a 00:11:45.953 SYMLINK libspdk_event_vhost_blk.so 00:11:45.953 SO libspdk_event_iobuf.so.3.0 00:11:45.953 SYMLINK libspdk_event_keyring.so 00:11:45.953 SYMLINK libspdk_event_sock.so 00:11:45.953 SYMLINK libspdk_event_scheduler.so 00:11:45.953 SYMLINK libspdk_event_vmd.so 00:11:45.953 SYMLINK libspdk_event_iobuf.so 00:11:46.212 CC module/event/subsystems/accel/accel.o 00:11:46.470 LIB libspdk_event_accel.a 00:11:46.470 SO libspdk_event_accel.so.6.0 00:11:46.470 SYMLINK libspdk_event_accel.so 00:11:46.728 CC module/event/subsystems/bdev/bdev.o 00:11:46.986 LIB libspdk_event_bdev.a 00:11:46.986 SO libspdk_event_bdev.so.6.0 00:11:47.244 SYMLINK libspdk_event_bdev.so 00:11:47.244 CC module/event/subsystems/nbd/nbd.o 00:11:47.244 CC module/event/subsystems/scsi/scsi.o 00:11:47.244 CC module/event/subsystems/ublk/ublk.o 00:11:47.244 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:11:47.244 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:11:47.502 LIB libspdk_event_nbd.a 00:11:47.502 LIB libspdk_event_ublk.a 00:11:47.502 SO libspdk_event_nbd.so.6.0 00:11:47.502 SO libspdk_event_ublk.so.3.0 00:11:47.502 LIB libspdk_event_scsi.a 00:11:47.502 SO libspdk_event_scsi.so.6.0 00:11:47.502 LIB libspdk_event_nvmf.a 00:11:47.502 SYMLINK libspdk_event_nbd.so 00:11:47.502 SYMLINK libspdk_event_ublk.so 00:11:47.759 SO libspdk_event_nvmf.so.6.0 00:11:47.759 SYMLINK libspdk_event_scsi.so 00:11:47.759 SYMLINK libspdk_event_nvmf.so 00:11:47.759 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:11:48.017 CC module/event/subsystems/iscsi/iscsi.o 00:11:48.017 LIB libspdk_event_vhost_scsi.a 00:11:48.017 SO libspdk_event_vhost_scsi.so.3.0 00:11:48.017 LIB libspdk_event_iscsi.a 00:11:48.017 SO libspdk_event_iscsi.so.6.0 00:11:48.017 SYMLINK libspdk_event_vhost_scsi.so 00:11:48.274 SYMLINK libspdk_event_iscsi.so 00:11:48.274 SO libspdk.so.6.0 00:11:48.274 SYMLINK libspdk.so 00:11:48.532 CXX app/trace/trace.o 00:11:48.791 CC examples/nvme/hello_world/hello_world.o 00:11:48.791 CC examples/sock/hello_world/hello_sock.o 00:11:48.791 CC examples/ioat/perf/perf.o 00:11:48.791 CC examples/vmd/lsvmd/lsvmd.o 00:11:48.791 CC examples/accel/perf/accel_perf.o 00:11:48.791 CC examples/nvmf/nvmf/nvmf.o 00:11:48.791 CC examples/blob/hello_world/hello_blob.o 00:11:48.791 CC test/accel/dif/dif.o 
00:11:48.791 CC examples/bdev/hello_world/hello_bdev.o 00:11:48.791 LINK lsvmd 00:11:49.048 LINK ioat_perf 00:11:49.048 LINK hello_world 00:11:49.048 LINK hello_sock 00:11:49.048 LINK nvmf 00:11:49.048 LINK hello_blob 00:11:49.048 LINK hello_bdev 00:11:49.048 CC examples/vmd/led/led.o 00:11:49.048 LINK accel_perf 00:11:49.306 CC examples/ioat/verify/verify.o 00:11:49.306 LINK spdk_trace 00:11:49.306 CC examples/bdev/bdevperf/bdevperf.o 00:11:49.306 CC examples/nvme/reconnect/reconnect.o 00:11:49.306 CC examples/nvme/nvme_manage/nvme_manage.o 00:11:49.306 LINK dif 00:11:49.306 LINK led 00:11:49.306 CC examples/blob/cli/blobcli.o 00:11:49.562 CC app/trace_record/trace_record.o 00:11:49.562 LINK verify 00:11:49.562 CC app/nvmf_tgt/nvmf_main.o 00:11:49.562 CC app/iscsi_tgt/iscsi_tgt.o 00:11:49.562 LINK reconnect 00:11:49.819 LINK nvmf_tgt 00:11:49.819 CC app/spdk_tgt/spdk_tgt.o 00:11:49.819 LINK spdk_trace_record 00:11:49.819 CC test/app/bdev_svc/bdev_svc.o 00:11:49.819 LINK iscsi_tgt 00:11:49.819 CC test/bdev/bdevio/bdevio.o 00:11:49.819 LINK nvme_manage 00:11:49.819 LINK blobcli 00:11:50.077 LINK spdk_tgt 00:11:50.077 CC app/spdk_lspci/spdk_lspci.o 00:11:50.077 CC app/spdk_nvme_perf/perf.o 00:11:50.077 LINK bdevperf 00:11:50.077 LINK bdev_svc 00:11:50.077 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:11:50.077 LINK spdk_lspci 00:11:50.077 CC examples/nvme/arbitration/arbitration.o 00:11:50.335 CC examples/nvme/hotplug/hotplug.o 00:11:50.335 CC examples/util/zipf/zipf.o 00:11:50.335 CC examples/nvme/cmb_copy/cmb_copy.o 00:11:50.335 LINK bdevio 00:11:50.335 CC examples/nvme/abort/abort.o 00:11:50.335 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:11:50.335 LINK zipf 00:11:50.592 LINK nvme_fuzz 00:11:50.592 LINK cmb_copy 00:11:50.592 LINK arbitration 00:11:50.592 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:11:50.592 LINK hotplug 00:11:50.592 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:11:50.592 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:11:50.850 LINK abort 00:11:50.850 CC app/spdk_nvme_identify/identify.o 00:11:50.850 LINK pmr_persistence 00:11:50.850 CC test/app/histogram_perf/histogram_perf.o 00:11:50.850 CC app/spdk_nvme_discover/discovery_aer.o 00:11:50.850 LINK spdk_nvme_perf 00:11:50.850 CC examples/thread/thread/thread_ex.o 00:11:51.108 CC test/blobfs/mkfs/mkfs.o 00:11:51.108 LINK histogram_perf 00:11:51.108 CC app/spdk_top/spdk_top.o 00:11:51.108 LINK vhost_fuzz 00:11:51.108 LINK spdk_nvme_discover 00:11:51.108 LINK thread 00:11:51.367 LINK mkfs 00:11:51.367 TEST_HEADER include/spdk/accel.h 00:11:51.367 TEST_HEADER include/spdk/accel_module.h 00:11:51.367 TEST_HEADER include/spdk/assert.h 00:11:51.367 TEST_HEADER include/spdk/barrier.h 00:11:51.367 TEST_HEADER include/spdk/base64.h 00:11:51.367 TEST_HEADER include/spdk/bdev.h 00:11:51.367 TEST_HEADER include/spdk/bdev_module.h 00:11:51.367 TEST_HEADER include/spdk/bdev_zone.h 00:11:51.367 CC app/vhost/vhost.o 00:11:51.367 TEST_HEADER include/spdk/bit_array.h 00:11:51.367 TEST_HEADER include/spdk/bit_pool.h 00:11:51.367 TEST_HEADER include/spdk/blob_bdev.h 00:11:51.367 TEST_HEADER include/spdk/blobfs_bdev.h 00:11:51.367 TEST_HEADER include/spdk/blobfs.h 00:11:51.367 TEST_HEADER include/spdk/blob.h 00:11:51.367 TEST_HEADER include/spdk/conf.h 00:11:51.367 TEST_HEADER include/spdk/config.h 00:11:51.367 TEST_HEADER include/spdk/cpuset.h 00:11:51.367 TEST_HEADER include/spdk/crc16.h 00:11:51.367 TEST_HEADER include/spdk/crc32.h 00:11:51.367 TEST_HEADER include/spdk/crc64.h 00:11:51.367 TEST_HEADER include/spdk/dif.h 00:11:51.367 
TEST_HEADER include/spdk/dma.h 00:11:51.367 TEST_HEADER include/spdk/endian.h 00:11:51.367 TEST_HEADER include/spdk/env_dpdk.h 00:11:51.367 TEST_HEADER include/spdk/env.h 00:11:51.367 TEST_HEADER include/spdk/event.h 00:11:51.367 TEST_HEADER include/spdk/fd_group.h 00:11:51.367 TEST_HEADER include/spdk/fd.h 00:11:51.367 TEST_HEADER include/spdk/file.h 00:11:51.367 TEST_HEADER include/spdk/ftl.h 00:11:51.367 TEST_HEADER include/spdk/gpt_spec.h 00:11:51.367 TEST_HEADER include/spdk/hexlify.h 00:11:51.367 TEST_HEADER include/spdk/histogram_data.h 00:11:51.367 CC examples/idxd/perf/perf.o 00:11:51.367 TEST_HEADER include/spdk/idxd.h 00:11:51.367 TEST_HEADER include/spdk/idxd_spec.h 00:11:51.367 TEST_HEADER include/spdk/init.h 00:11:51.367 TEST_HEADER include/spdk/ioat.h 00:11:51.367 TEST_HEADER include/spdk/ioat_spec.h 00:11:51.367 TEST_HEADER include/spdk/iscsi_spec.h 00:11:51.367 TEST_HEADER include/spdk/json.h 00:11:51.367 TEST_HEADER include/spdk/jsonrpc.h 00:11:51.367 TEST_HEADER include/spdk/keyring.h 00:11:51.367 TEST_HEADER include/spdk/keyring_module.h 00:11:51.367 TEST_HEADER include/spdk/likely.h 00:11:51.367 TEST_HEADER include/spdk/log.h 00:11:51.367 TEST_HEADER include/spdk/lvol.h 00:11:51.367 CC test/app/jsoncat/jsoncat.o 00:11:51.367 TEST_HEADER include/spdk/memory.h 00:11:51.367 TEST_HEADER include/spdk/mmio.h 00:11:51.367 TEST_HEADER include/spdk/nbd.h 00:11:51.624 TEST_HEADER include/spdk/notify.h 00:11:51.624 TEST_HEADER include/spdk/nvme.h 00:11:51.624 TEST_HEADER include/spdk/nvme_intel.h 00:11:51.624 TEST_HEADER include/spdk/nvme_ocssd.h 00:11:51.624 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:11:51.624 TEST_HEADER include/spdk/nvme_spec.h 00:11:51.624 TEST_HEADER include/spdk/nvme_zns.h 00:11:51.624 TEST_HEADER include/spdk/nvmf_cmd.h 00:11:51.624 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:11:51.624 TEST_HEADER include/spdk/nvmf.h 00:11:51.624 TEST_HEADER include/spdk/nvmf_spec.h 00:11:51.624 TEST_HEADER include/spdk/nvmf_transport.h 00:11:51.624 TEST_HEADER include/spdk/opal.h 00:11:51.624 TEST_HEADER include/spdk/opal_spec.h 00:11:51.624 TEST_HEADER include/spdk/pci_ids.h 00:11:51.624 TEST_HEADER include/spdk/pipe.h 00:11:51.624 TEST_HEADER include/spdk/queue.h 00:11:51.624 TEST_HEADER include/spdk/reduce.h 00:11:51.624 TEST_HEADER include/spdk/rpc.h 00:11:51.624 TEST_HEADER include/spdk/scheduler.h 00:11:51.624 TEST_HEADER include/spdk/scsi.h 00:11:51.624 TEST_HEADER include/spdk/scsi_spec.h 00:11:51.624 TEST_HEADER include/spdk/sock.h 00:11:51.624 TEST_HEADER include/spdk/stdinc.h 00:11:51.624 TEST_HEADER include/spdk/string.h 00:11:51.624 CC test/app/stub/stub.o 00:11:51.624 TEST_HEADER include/spdk/thread.h 00:11:51.624 TEST_HEADER include/spdk/trace.h 00:11:51.624 TEST_HEADER include/spdk/trace_parser.h 00:11:51.624 TEST_HEADER include/spdk/tree.h 00:11:51.624 LINK vhost 00:11:51.624 TEST_HEADER include/spdk/ublk.h 00:11:51.624 TEST_HEADER include/spdk/util.h 00:11:51.624 TEST_HEADER include/spdk/uuid.h 00:11:51.624 TEST_HEADER include/spdk/version.h 00:11:51.624 TEST_HEADER include/spdk/vfio_user_pci.h 00:11:51.624 TEST_HEADER include/spdk/vfio_user_spec.h 00:11:51.624 TEST_HEADER include/spdk/vhost.h 00:11:51.624 TEST_HEADER include/spdk/vmd.h 00:11:51.624 TEST_HEADER include/spdk/xor.h 00:11:51.624 TEST_HEADER include/spdk/zipf.h 00:11:51.624 CXX test/cpp_headers/accel.o 00:11:51.624 LINK jsoncat 00:11:51.624 LINK spdk_nvme_identify 00:11:51.624 CC examples/interrupt_tgt/interrupt_tgt.o 00:11:51.882 LINK stub 00:11:51.882 CXX test/cpp_headers/accel_module.o 
00:11:51.882 LINK idxd_perf 00:11:51.882 LINK interrupt_tgt 00:11:51.882 LINK spdk_top 00:11:51.882 CXX test/cpp_headers/assert.o 00:11:51.882 CC app/spdk_dd/spdk_dd.o 00:11:52.141 LINK iscsi_fuzz 00:11:52.141 CC test/dma/test_dma/test_dma.o 00:11:52.141 CC test/event/event_perf/event_perf.o 00:11:52.141 CC test/env/mem_callbacks/mem_callbacks.o 00:11:52.141 CC app/fio/nvme/fio_plugin.o 00:11:52.141 CXX test/cpp_headers/barrier.o 00:11:52.141 CC app/fio/bdev/fio_plugin.o 00:11:52.398 LINK event_perf 00:11:52.398 CXX test/cpp_headers/base64.o 00:11:52.398 CC test/lvol/esnap/esnap.o 00:11:52.398 LINK spdk_dd 00:11:52.398 CC test/nvme/aer/aer.o 00:11:52.398 LINK test_dma 00:11:52.398 CXX test/cpp_headers/bdev.o 00:11:52.655 CC test/event/reactor/reactor.o 00:11:52.655 CXX test/cpp_headers/bdev_module.o 00:11:52.655 CXX test/cpp_headers/bdev_zone.o 00:11:52.912 LINK spdk_nvme 00:11:52.912 LINK mem_callbacks 00:11:52.912 LINK reactor 00:11:52.912 LINK aer 00:11:52.912 CC test/env/vtophys/vtophys.o 00:11:52.912 CXX test/cpp_headers/bit_array.o 00:11:52.912 LINK spdk_bdev 00:11:52.912 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:11:52.912 CXX test/cpp_headers/bit_pool.o 00:11:53.169 LINK vtophys 00:11:53.169 CC test/env/memory/memory_ut.o 00:11:53.169 CC test/event/reactor_perf/reactor_perf.o 00:11:53.169 CC test/nvme/reset/reset.o 00:11:53.169 LINK env_dpdk_post_init 00:11:53.169 CC test/rpc_client/rpc_client_test.o 00:11:53.169 CXX test/cpp_headers/blob_bdev.o 00:11:53.169 LINK reactor_perf 00:11:53.427 CC test/thread/poller_perf/poller_perf.o 00:11:53.427 CC test/event/app_repeat/app_repeat.o 00:11:53.427 LINK rpc_client_test 00:11:53.427 CXX test/cpp_headers/blobfs_bdev.o 00:11:53.427 CC test/env/pci/pci_ut.o 00:11:53.427 LINK reset 00:11:53.427 LINK poller_perf 00:11:53.427 LINK app_repeat 00:11:53.685 CC test/nvme/sgl/sgl.o 00:11:53.686 CXX test/cpp_headers/blobfs.o 00:11:53.686 CXX test/cpp_headers/blob.o 00:11:53.686 CC test/nvme/e2edp/nvme_dp.o 00:11:53.686 CC test/event/scheduler/scheduler.o 00:11:53.944 CXX test/cpp_headers/conf.o 00:11:53.944 CC test/nvme/overhead/overhead.o 00:11:53.944 LINK pci_ut 00:11:53.944 LINK sgl 00:11:53.944 CC test/nvme/err_injection/err_injection.o 00:11:53.944 LINK nvme_dp 00:11:53.944 CXX test/cpp_headers/config.o 00:11:54.202 CXX test/cpp_headers/cpuset.o 00:11:54.202 LINK memory_ut 00:11:54.202 LINK scheduler 00:11:54.202 CXX test/cpp_headers/crc16.o 00:11:54.202 LINK err_injection 00:11:54.202 LINK overhead 00:11:54.202 CC test/nvme/startup/startup.o 00:11:54.202 CC test/nvme/reserve/reserve.o 00:11:54.459 CXX test/cpp_headers/crc32.o 00:11:54.459 CC test/nvme/simple_copy/simple_copy.o 00:11:54.459 CC test/nvme/connect_stress/connect_stress.o 00:11:54.459 CC test/nvme/boot_partition/boot_partition.o 00:11:54.459 CC test/nvme/compliance/nvme_compliance.o 00:11:54.459 LINK startup 00:11:54.459 LINK reserve 00:11:54.716 CC test/nvme/fused_ordering/fused_ordering.o 00:11:54.716 CXX test/cpp_headers/crc64.o 00:11:54.716 LINK connect_stress 00:11:54.716 LINK boot_partition 00:11:54.716 LINK simple_copy 00:11:54.716 CXX test/cpp_headers/dif.o 00:11:54.975 LINK fused_ordering 00:11:54.975 CC test/nvme/doorbell_aers/doorbell_aers.o 00:11:54.975 CC test/nvme/fdp/fdp.o 00:11:54.975 CXX test/cpp_headers/dma.o 00:11:54.975 LINK nvme_compliance 00:11:54.975 CC test/nvme/cuse/cuse.o 00:11:54.975 CXX test/cpp_headers/endian.o 00:11:54.975 CXX test/cpp_headers/env_dpdk.o 00:11:54.975 CXX test/cpp_headers/env.o 00:11:54.975 LINK doorbell_aers 00:11:54.975 CXX 
test/cpp_headers/event.o 00:11:54.975 CXX test/cpp_headers/fd_group.o 00:11:55.233 CXX test/cpp_headers/fd.o 00:11:55.233 CXX test/cpp_headers/file.o 00:11:55.233 CXX test/cpp_headers/ftl.o 00:11:55.233 CXX test/cpp_headers/gpt_spec.o 00:11:55.233 LINK fdp 00:11:55.233 CXX test/cpp_headers/hexlify.o 00:11:55.574 CXX test/cpp_headers/histogram_data.o 00:11:55.574 CXX test/cpp_headers/idxd.o 00:11:55.574 CXX test/cpp_headers/idxd_spec.o 00:11:55.574 CXX test/cpp_headers/init.o 00:11:55.574 CXX test/cpp_headers/ioat.o 00:11:55.833 CXX test/cpp_headers/ioat_spec.o 00:11:55.833 CXX test/cpp_headers/iscsi_spec.o 00:11:55.833 CXX test/cpp_headers/json.o 00:11:55.833 CXX test/cpp_headers/jsonrpc.o 00:11:55.833 CXX test/cpp_headers/keyring.o 00:11:55.833 CXX test/cpp_headers/keyring_module.o 00:11:55.833 CXX test/cpp_headers/likely.o 00:11:55.833 CXX test/cpp_headers/log.o 00:11:55.833 CXX test/cpp_headers/lvol.o 00:11:56.091 CXX test/cpp_headers/mmio.o 00:11:56.091 CXX test/cpp_headers/memory.o 00:11:56.091 CXX test/cpp_headers/nbd.o 00:11:56.091 CXX test/cpp_headers/notify.o 00:11:56.091 CXX test/cpp_headers/nvme.o 00:11:56.091 CXX test/cpp_headers/nvme_intel.o 00:11:56.349 CXX test/cpp_headers/nvme_ocssd.o 00:11:56.349 CXX test/cpp_headers/nvme_ocssd_spec.o 00:11:56.349 CXX test/cpp_headers/nvme_spec.o 00:11:56.349 CXX test/cpp_headers/nvme_zns.o 00:11:56.349 CXX test/cpp_headers/nvmf_cmd.o 00:11:56.349 LINK cuse 00:11:56.349 CXX test/cpp_headers/nvmf_fc_spec.o 00:11:56.349 CXX test/cpp_headers/nvmf.o 00:11:56.607 CXX test/cpp_headers/nvmf_spec.o 00:11:56.607 CXX test/cpp_headers/nvmf_transport.o 00:11:56.607 CXX test/cpp_headers/opal.o 00:11:56.607 CXX test/cpp_headers/opal_spec.o 00:11:56.607 CXX test/cpp_headers/pci_ids.o 00:11:56.607 CXX test/cpp_headers/pipe.o 00:11:56.607 CXX test/cpp_headers/queue.o 00:11:56.607 CXX test/cpp_headers/reduce.o 00:11:56.607 CXX test/cpp_headers/rpc.o 00:11:56.607 CXX test/cpp_headers/scheduler.o 00:11:56.607 CXX test/cpp_headers/scsi.o 00:11:56.863 CXX test/cpp_headers/scsi_spec.o 00:11:56.863 CXX test/cpp_headers/sock.o 00:11:56.863 CXX test/cpp_headers/stdinc.o 00:11:56.863 CXX test/cpp_headers/string.o 00:11:56.863 CXX test/cpp_headers/thread.o 00:11:56.863 CXX test/cpp_headers/trace.o 00:11:56.863 CXX test/cpp_headers/trace_parser.o 00:11:56.863 CXX test/cpp_headers/tree.o 00:11:56.863 CXX test/cpp_headers/ublk.o 00:11:57.122 CXX test/cpp_headers/util.o 00:11:57.122 CXX test/cpp_headers/uuid.o 00:11:57.122 CXX test/cpp_headers/version.o 00:11:57.122 CXX test/cpp_headers/vfio_user_pci.o 00:11:57.122 CXX test/cpp_headers/vfio_user_spec.o 00:11:57.122 CXX test/cpp_headers/vhost.o 00:11:57.122 CXX test/cpp_headers/vmd.o 00:11:57.122 CXX test/cpp_headers/xor.o 00:11:57.122 CXX test/cpp_headers/zipf.o 00:11:58.056 LINK esnap 00:11:58.999 00:11:58.999 real 0m58.785s 00:11:58.999 user 5m33.393s 00:11:58.999 sys 1m12.433s 00:11:58.999 10:59:27 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:11:58.999 10:59:27 -- common/autotest_common.sh@10 -- $ set +x 00:11:58.999 ************************************ 00:11:58.999 END TEST make 00:11:58.999 ************************************ 00:11:58.999 10:59:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:11:58.999 10:59:27 -- pm/common@30 -- $ signal_monitor_resources TERM 00:11:58.999 10:59:27 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:11:58.999 10:59:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:58.999 10:59:27 -- pm/common@44 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:11:58.999 10:59:27 -- pm/common@45 -- $ pid=6044 00:11:58.999 10:59:27 -- pm/common@52 -- $ sudo kill -TERM 6044 00:11:58.999 10:59:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:58.999 10:59:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:11:58.999 10:59:27 -- pm/common@45 -- $ pid=6045 00:11:58.999 10:59:27 -- pm/common@52 -- $ sudo kill -TERM 6045 00:11:58.999 10:59:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:58.999 10:59:27 -- nvmf/common.sh@7 -- # uname -s 00:11:58.999 10:59:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.999 10:59:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.999 10:59:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.999 10:59:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.999 10:59:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.999 10:59:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.999 10:59:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.999 10:59:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.999 10:59:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.999 10:59:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.999 10:59:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:11:58.999 10:59:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:11:58.999 10:59:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.999 10:59:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.999 10:59:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:58.999 10:59:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.999 10:59:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.999 10:59:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.999 10:59:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.999 10:59:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.999 10:59:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.999 10:59:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.999 10:59:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.999 10:59:27 -- paths/export.sh@5 -- # export PATH 00:11:58.999 10:59:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.999 10:59:27 -- nvmf/common.sh@47 -- # : 0 00:11:58.999 10:59:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.999 10:59:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.999 10:59:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.999 10:59:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.999 10:59:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.999 10:59:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.999 10:59:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.999 10:59:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.999 10:59:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:11:58.999 10:59:27 -- spdk/autotest.sh@32 -- # uname -s 00:11:58.999 10:59:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:11:58.999 10:59:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:11:58.999 10:59:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:58.999 10:59:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:11:58.999 10:59:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:58.999 10:59:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:11:58.999 10:59:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:11:58.999 10:59:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:11:59.000 10:59:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:11:59.000 10:59:27 -- spdk/autotest.sh@48 -- # udevadm_pid=67073 00:11:59.000 10:59:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:11:59.000 10:59:27 -- pm/common@17 -- # local monitor 00:11:59.000 10:59:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:59.000 10:59:27 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=67079 00:11:59.000 10:59:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:59.000 10:59:27 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=67083 00:11:59.000 10:59:27 -- pm/common@21 -- # date +%s 00:11:59.000 10:59:27 -- pm/common@26 -- # sleep 1 00:11:59.000 10:59:27 -- pm/common@21 -- # date +%s 00:11:59.000 10:59:27 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713437967 00:11:59.000 10:59:27 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713437967 00:11:59.258 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713437967_collect-vmstat.pm.log 00:11:59.258 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713437967_collect-cpu-load.pm.log 00:12:00.193 10:59:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:12:00.193 10:59:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:12:00.193 10:59:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:00.193 10:59:28 -- common/autotest_common.sh@10 -- # set +x 00:12:00.193 10:59:28 -- spdk/autotest.sh@59 -- # 
create_test_list 00:12:00.193 10:59:28 -- common/autotest_common.sh@734 -- # xtrace_disable 00:12:00.193 10:59:28 -- common/autotest_common.sh@10 -- # set +x 00:12:00.193 10:59:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:12:00.193 10:59:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:12:00.193 10:59:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:12:00.194 10:59:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:12:00.194 10:59:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:12:00.194 10:59:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:12:00.194 10:59:28 -- common/autotest_common.sh@1441 -- # uname 00:12:00.194 10:59:28 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:12:00.194 10:59:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:12:00.194 10:59:28 -- common/autotest_common.sh@1461 -- # uname 00:12:00.194 10:59:28 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:12:00.194 10:59:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:12:00.194 10:59:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:12:00.194 10:59:28 -- spdk/autotest.sh@72 -- # hash lcov 00:12:00.194 10:59:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:12:00.194 10:59:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:12:00.194 --rc lcov_branch_coverage=1 00:12:00.194 --rc lcov_function_coverage=1 00:12:00.194 --rc genhtml_branch_coverage=1 00:12:00.194 --rc genhtml_function_coverage=1 00:12:00.194 --rc genhtml_legend=1 00:12:00.194 --rc geninfo_all_blocks=1 00:12:00.194 ' 00:12:00.194 10:59:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:12:00.194 --rc lcov_branch_coverage=1 00:12:00.194 --rc lcov_function_coverage=1 00:12:00.194 --rc genhtml_branch_coverage=1 00:12:00.194 --rc genhtml_function_coverage=1 00:12:00.194 --rc genhtml_legend=1 00:12:00.194 --rc geninfo_all_blocks=1 00:12:00.194 ' 00:12:00.194 10:59:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:12:00.194 --rc lcov_branch_coverage=1 00:12:00.194 --rc lcov_function_coverage=1 00:12:00.194 --rc genhtml_branch_coverage=1 00:12:00.194 --rc genhtml_function_coverage=1 00:12:00.194 --rc genhtml_legend=1 00:12:00.194 --rc geninfo_all_blocks=1 00:12:00.194 --no-external' 00:12:00.194 10:59:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:12:00.194 --rc lcov_branch_coverage=1 00:12:00.194 --rc lcov_function_coverage=1 00:12:00.194 --rc genhtml_branch_coverage=1 00:12:00.194 --rc genhtml_function_coverage=1 00:12:00.194 --rc genhtml_legend=1 00:12:00.194 --rc geninfo_all_blocks=1 00:12:00.194 --no-external' 00:12:00.194 10:59:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:12:00.194 lcov: LCOV version 1.14 00:12:00.194 10:59:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:12:08.304 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:12:08.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:12:08.304 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:12:08.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:12:08.304 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:12:08.304 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:12:14.860 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:12:14.860 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:12:27.058 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:12:27.058 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:12:27.059 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:12:27.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:12:27.059 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:12:27.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:12:27.059 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no 
functions found 00:12:27.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:12:27.059 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:12:27.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:12:27.059 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:12:27.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:12:27.059 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:12:27.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:12:27.059 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:12:27.059 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:12:27.318 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:12:27.318 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:12:27.318 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:12:27.319 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:12:27.319 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:12:27.319 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:12:27.578 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:12:27.578 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:12:31.764 10:59:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:12:31.764 10:59:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:31.764 10:59:59 -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 10:59:59 -- spdk/autotest.sh@91 -- # rm -f 00:12:31.764 10:59:59 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:32.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.022 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:12:32.022 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:12:32.022 11:00:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:12:32.022 11:00:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:32.022 11:00:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:32.022 11:00:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:32.022 11:00:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:32.022 11:00:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:32.022 11:00:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:32.022 11:00:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:32.022 11:00:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:32.022 11:00:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:32.022 11:00:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:32.022 11:00:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:32.022 11:00:00 -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:12:32.022 11:00:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:32.022 11:00:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:32.022 11:00:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:12:32.022 11:00:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:12:32.022 11:00:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:32.022 11:00:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:32.022 11:00:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:32.022 11:00:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:12:32.022 11:00:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:12:32.022 11:00:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:32.022 11:00:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:32.022 11:00:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:12:32.022 11:00:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:32.022 11:00:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:32.023 11:00:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:12:32.023 11:00:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:12:32.023 11:00:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:32.023 No valid GPT data, bailing 00:12:32.023 11:00:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:32.023 11:00:00 -- scripts/common.sh@391 -- # pt= 00:12:32.023 11:00:00 -- scripts/common.sh@392 -- # return 1 00:12:32.023 11:00:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:12:32.023 1+0 records in 00:12:32.023 1+0 records out 00:12:32.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442361 s, 237 MB/s 00:12:32.023 11:00:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:32.023 11:00:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:32.023 11:00:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:12:32.023 11:00:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:12:32.023 11:00:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:12:32.281 No valid GPT data, bailing 00:12:32.281 11:00:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:32.281 11:00:00 -- scripts/common.sh@391 -- # pt= 00:12:32.281 11:00:00 -- scripts/common.sh@392 -- # return 1 00:12:32.281 11:00:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:12:32.281 1+0 records in 00:12:32.281 1+0 records out 00:12:32.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619084 s, 169 MB/s 00:12:32.281 11:00:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:32.281 11:00:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:32.281 11:00:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:12:32.281 11:00:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:12:32.281 11:00:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:12:32.281 No valid GPT data, bailing 00:12:32.281 11:00:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:12:32.281 11:00:00 -- scripts/common.sh@391 -- # pt= 00:12:32.281 11:00:00 -- scripts/common.sh@392 -- # return 1 00:12:32.281 11:00:00 -- spdk/autotest.sh@114 -- # dd 
if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:12:32.281 1+0 records in 00:12:32.281 1+0 records out 00:12:32.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00322325 s, 325 MB/s 00:12:32.281 11:00:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:12:32.281 11:00:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:12:32.281 11:00:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:12:32.281 11:00:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:12:32.281 11:00:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:12:32.281 No valid GPT data, bailing 00:12:32.281 11:00:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:12:32.281 11:00:00 -- scripts/common.sh@391 -- # pt= 00:12:32.281 11:00:00 -- scripts/common.sh@392 -- # return 1 00:12:32.281 11:00:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:12:32.281 1+0 records in 00:12:32.281 1+0 records out 00:12:32.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498619 s, 210 MB/s 00:12:32.281 11:00:00 -- spdk/autotest.sh@118 -- # sync 00:12:32.539 11:00:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:12:32.539 11:00:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:12:32.539 11:00:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:12:34.447 11:00:02 -- spdk/autotest.sh@124 -- # uname -s 00:12:34.447 11:00:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:12:34.447 11:00:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:34.447 11:00:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:34.447 11:00:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.447 11:00:02 -- common/autotest_common.sh@10 -- # set +x 00:12:34.447 ************************************ 00:12:34.447 START TEST setup.sh 00:12:34.447 ************************************ 00:12:34.447 11:00:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:34.447 * Looking for test storage... 00:12:34.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:34.447 11:00:02 -- setup/test-setup.sh@10 -- # uname -s 00:12:34.447 11:00:02 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:12:34.447 11:00:02 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:34.447 11:00:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:34.447 11:00:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.447 11:00:02 -- common/autotest_common.sh@10 -- # set +x 00:12:34.447 ************************************ 00:12:34.447 START TEST acl 00:12:34.447 ************************************ 00:12:34.447 11:00:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:34.447 * Looking for test storage... 
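The get_zoned_devs/is_block_zoned calls traced above (and repeated at the start of the acl test below) boil down to reading /sys/block/<dev>/queue/zoned and treating anything other than "none" as a zoned device. A rough, simplified sketch of that check, not the actual SPDK helper (which also records the PCI address of each zoned namespace):

    # Collect zoned NVMe block devices by inspecting the sysfs "zoned" attribute.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # A device counts as zoned when the attribute exists and is not "none".
        if [[ -e $nvme/queue/zoned ]] && [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned devices found: ${#zoned_devs[@]}"

In this run every namespace reports "none", so the (( 0 > 0 )) branch above is skipped and no device is excluded from the GPT/dd checks.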
00:12:34.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:34.447 11:00:03 -- setup/acl.sh@10 -- # get_zoned_devs 00:12:34.447 11:00:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:34.447 11:00:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:34.447 11:00:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:34.447 11:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.447 11:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:34.447 11:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:34.447 11:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:34.447 11:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.447 11:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.447 11:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:34.447 11:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:34.447 11:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:34.447 11:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.447 11:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.447 11:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:12:34.447 11:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:12:34.447 11:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:34.447 11:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.447 11:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:34.447 11:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:12:34.447 11:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:12:34.447 11:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:34.447 11:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:34.447 11:00:03 -- setup/acl.sh@12 -- # devs=() 00:12:34.447 11:00:03 -- setup/acl.sh@12 -- # declare -a devs 00:12:34.447 11:00:03 -- setup/acl.sh@13 -- # drivers=() 00:12:34.447 11:00:03 -- setup/acl.sh@13 -- # declare -A drivers 00:12:34.447 11:00:03 -- setup/acl.sh@51 -- # setup reset 00:12:34.447 11:00:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:34.447 11:00:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:35.382 11:00:03 -- setup/acl.sh@52 -- # collect_setup_devs 00:12:35.382 11:00:03 -- setup/acl.sh@16 -- # local dev driver 00:12:35.382 11:00:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:35.382 11:00:03 -- setup/acl.sh@15 -- # setup output status 00:12:35.382 11:00:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:35.382 11:00:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # continue 00:12:35.948 11:00:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:35.948 Hugepages 00:12:35.948 node hugesize free / total 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # continue 00:12:35.948 11:00:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:35.948 00:12:35.948 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # continue 00:12:35.948 11:00:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:12:35.948 11:00:04 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:12:35.948 11:00:04 -- setup/acl.sh@20 -- # continue 00:12:35.948 11:00:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:35.948 11:00:04 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:12:35.948 11:00:04 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:35.948 11:00:04 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:35.948 11:00:04 -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:35.948 11:00:04 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:35.948 11:00:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:36.206 11:00:04 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:12:36.206 11:00:04 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:36.206 11:00:04 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:36.206 11:00:04 -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:36.206 11:00:04 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:36.206 11:00:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:36.206 11:00:04 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:12:36.206 11:00:04 -- setup/acl.sh@54 -- # run_test denied denied 00:12:36.206 11:00:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:36.206 11:00:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.206 11:00:04 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 ************************************ 00:12:36.206 START TEST denied 00:12:36.206 ************************************ 00:12:36.206 11:00:04 -- common/autotest_common.sh@1111 -- # denied 00:12:36.206 11:00:04 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:12:36.206 11:00:04 -- setup/acl.sh@38 -- # setup output config 00:12:36.206 11:00:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:36.206 11:00:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:36.206 11:00:04 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:12:37.140 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:12:37.140 11:00:05 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:12:37.140 11:00:05 -- setup/acl.sh@28 -- # local dev driver 00:12:37.140 11:00:05 -- setup/acl.sh@30 -- # for dev in "$@" 00:12:37.140 11:00:05 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:12:37.140 11:00:05 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:12:37.140 11:00:05 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:37.140 11:00:05 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:37.140 11:00:05 -- setup/acl.sh@41 -- # setup reset 00:12:37.140 11:00:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:37.140 11:00:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:37.733 00:12:37.733 real 0m1.416s 00:12:37.733 user 0m0.534s 00:12:37.733 sys 0m0.820s 00:12:37.733 11:00:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:37.733 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:12:37.733 ************************************ 00:12:37.733 END TEST denied 00:12:37.733 ************************************ 00:12:37.733 11:00:06 -- setup/acl.sh@55 
-- # run_test allowed allowed 00:12:37.733 11:00:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:37.733 11:00:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.733 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:12:37.733 ************************************ 00:12:37.733 START TEST allowed 00:12:37.733 ************************************ 00:12:37.733 11:00:06 -- common/autotest_common.sh@1111 -- # allowed 00:12:37.733 11:00:06 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:12:37.733 11:00:06 -- setup/acl.sh@45 -- # setup output config 00:12:37.733 11:00:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:37.733 11:00:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:37.733 11:00:06 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:12:38.668 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:38.668 11:00:06 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:12:38.668 11:00:06 -- setup/acl.sh@28 -- # local dev driver 00:12:38.668 11:00:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:12:38.668 11:00:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:12:38.668 11:00:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:12:38.668 11:00:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:38.668 11:00:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:38.668 11:00:06 -- setup/acl.sh@48 -- # setup reset 00:12:38.668 11:00:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:38.668 11:00:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:39.234 00:12:39.234 real 0m1.510s 00:12:39.234 user 0m0.663s 00:12:39.234 sys 0m0.817s 00:12:39.234 11:00:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:39.234 11:00:07 -- common/autotest_common.sh@10 -- # set +x 00:12:39.234 ************************************ 00:12:39.234 END TEST allowed 00:12:39.234 ************************************ 00:12:39.234 00:12:39.234 real 0m4.802s 00:12:39.234 user 0m2.045s 00:12:39.234 sys 0m2.654s 00:12:39.234 11:00:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:39.234 11:00:07 -- common/autotest_common.sh@10 -- # set +x 00:12:39.234 ************************************ 00:12:39.234 END TEST acl 00:12:39.234 ************************************ 00:12:39.234 11:00:07 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:39.234 11:00:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:39.234 11:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.234 11:00:07 -- common/autotest_common.sh@10 -- # set +x 00:12:39.234 ************************************ 00:12:39.234 START TEST hugepages 00:12:39.234 ************************************ 00:12:39.234 11:00:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:39.493 * Looking for test storage... 
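The acl "denied" path above verifies that a controller listed in PCI_BLOCKED stays on its original kernel driver after setup.sh config; a hedged, minimal sketch of that verification (example BDF taken from this log, not a general-purpose implementation):

    # Check that the blocked controller is still bound to the nvme driver.
    bdf=0000:00:10.0   # address blocked via PCI_BLOCKED in this run
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    if [[ ${driver##*/} == nvme ]]; then
        echo "$bdf still claimed by nvme, denied test passes"
    fi

The "allowed" variant flips the expectation: with PCI_ALLOWED=0000:00:10.0 that device is rebound by setup.sh, as the "nvme -> uio_pci_generic" line above shows, while the remaining controller is verified to still be on nvme.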
00:12:39.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:39.493 11:00:07 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:12:39.493 11:00:07 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:12:39.493 11:00:07 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:12:39.493 11:00:07 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:12:39.493 11:00:07 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:12:39.493 11:00:07 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:12:39.493 11:00:07 -- setup/common.sh@17 -- # local get=Hugepagesize 00:12:39.493 11:00:07 -- setup/common.sh@18 -- # local node= 00:12:39.493 11:00:07 -- setup/common.sh@19 -- # local var val 00:12:39.493 11:00:07 -- setup/common.sh@20 -- # local mem_f mem 00:12:39.493 11:00:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:39.493 11:00:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:39.493 11:00:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:39.493 11:00:07 -- setup/common.sh@28 -- # mapfile -t mem 00:12:39.493 11:00:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:39.493 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.493 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4016712 kB' 'MemAvailable: 7384032 kB' 'Buffers: 3456 kB' 'Cached: 3565484 kB' 'SwapCached: 0 kB' 'Active: 875912 kB' 'Inactive: 2799204 kB' 'Active(anon): 116668 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 107608 kB' 'Mapped: 48988 kB' 'Shmem: 10492 kB' 'KReclaimable: 91620 kB' 'Slab: 177576 kB' 'SReclaimable: 91620 kB' 'SUnreclaim: 85956 kB' 'KernelStack: 6672 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 339364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- 
setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.494 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.494 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # continue 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # IFS=': ' 00:12:39.495 11:00:07 -- setup/common.sh@31 -- # read -r var val _ 00:12:39.495 11:00:07 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:39.495 11:00:07 -- setup/common.sh@33 -- # echo 2048 00:12:39.495 11:00:07 -- setup/common.sh@33 -- # return 0 00:12:39.495 11:00:07 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:12:39.495 11:00:07 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:12:39.495 11:00:07 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:12:39.495 11:00:07 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:12:39.495 11:00:07 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:12:39.495 11:00:07 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
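[editor's note] The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value unit" line at a time until it reaches Hugepagesize, at which point it echoes 2048 (kB); the default_setup test that starts just below then derives nr_hugepages=1024 from that value (2097152 / 2048 = 1024, consistent with the size and page-size figures shown in the trace, assuming both are expressed in kB). A minimal standalone sketch of that parse-and-size logic follows; the names get_meminfo_value, size_kb and nr_pages are illustrative stand-ins, not the repository's actual helpers.

  #!/usr/bin/env bash
  # Illustrative stand-in for the get_meminfo loop traced above:
  # read /proc/meminfo field by field and print the value of the
  # requested key (e.g. "Hugepagesize" -> "2048").
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  # Derive the page count the default_setup test requests: a reservation
  # of 2097152 (kB, assumed) divided by the 2048 kB default huge page
  # size gives the 1024 pages seen later in the trace.
  default_hugepages=$(get_meminfo_value Hugepagesize)   # 2048 on this host
  size_kb=2097152
  nr_pages=$(( size_kb / default_hugepages ))           # 1024
  echo "nr_hugepages=$nr_pages"

[end of editor's note; the console log resumes below]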
00:12:39.495 11:00:07 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:12:39.495 11:00:07 -- setup/hugepages.sh@207 -- # get_nodes 00:12:39.495 11:00:07 -- setup/hugepages.sh@27 -- # local node 00:12:39.495 11:00:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:39.495 11:00:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:12:39.495 11:00:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:39.495 11:00:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:39.495 11:00:07 -- setup/hugepages.sh@208 -- # clear_hp 00:12:39.495 11:00:07 -- setup/hugepages.sh@37 -- # local node hp 00:12:39.495 11:00:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:39.495 11:00:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:39.495 11:00:07 -- setup/hugepages.sh@41 -- # echo 0 00:12:39.495 11:00:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:39.495 11:00:07 -- setup/hugepages.sh@41 -- # echo 0 00:12:39.495 11:00:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:39.495 11:00:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:39.495 11:00:07 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:12:39.495 11:00:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:39.495 11:00:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.495 11:00:07 -- common/autotest_common.sh@10 -- # set +x 00:12:39.495 ************************************ 00:12:39.495 START TEST default_setup 00:12:39.495 ************************************ 00:12:39.495 11:00:08 -- common/autotest_common.sh@1111 -- # default_setup 00:12:39.495 11:00:08 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:12:39.495 11:00:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:12:39.495 11:00:08 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:39.495 11:00:08 -- setup/hugepages.sh@51 -- # shift 00:12:39.495 11:00:08 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:39.495 11:00:08 -- setup/hugepages.sh@52 -- # local node_ids 00:12:39.495 11:00:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:39.495 11:00:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:39.495 11:00:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:39.495 11:00:08 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:39.495 11:00:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:39.495 11:00:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:39.495 11:00:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:39.495 11:00:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:39.495 11:00:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:39.495 11:00:08 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:39.495 11:00:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:39.495 11:00:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:39.495 11:00:08 -- setup/hugepages.sh@73 -- # return 0 00:12:39.495 11:00:08 -- setup/hugepages.sh@137 -- # setup output 00:12:39.495 11:00:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:39.495 11:00:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:40.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.323 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.323 
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.323 11:00:08 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:12:40.323 11:00:08 -- setup/hugepages.sh@89 -- # local node 00:12:40.323 11:00:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:40.323 11:00:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:40.323 11:00:08 -- setup/hugepages.sh@92 -- # local surp 00:12:40.323 11:00:08 -- setup/hugepages.sh@93 -- # local resv 00:12:40.323 11:00:08 -- setup/hugepages.sh@94 -- # local anon 00:12:40.323 11:00:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:40.323 11:00:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:40.323 11:00:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:40.323 11:00:08 -- setup/common.sh@18 -- # local node= 00:12:40.323 11:00:08 -- setup/common.sh@19 -- # local var val 00:12:40.323 11:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:12:40.323 11:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:40.323 11:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:40.323 11:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:40.323 11:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:12:40.323 11:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:40.323 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.323 11:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6107688 kB' 'MemAvailable: 9474856 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 892204 kB' 'Inactive: 2799212 kB' 'Active(anon): 132960 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 888 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 49128 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177280 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85980 kB' 'KernelStack: 6592 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:40.323 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.323 11:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.323 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.323 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.323 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.323 11:00:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.323 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.323 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.323 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.323 11:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.323 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 
11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 
-- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.324 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.324 11:00:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:40.325 11:00:08 -- setup/common.sh@33 -- # echo 0 00:12:40.325 11:00:08 -- setup/common.sh@33 -- # return 0 00:12:40.325 11:00:08 -- setup/hugepages.sh@97 -- # anon=0 00:12:40.325 11:00:08 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:40.325 11:00:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:40.325 11:00:08 -- setup/common.sh@18 -- # local node= 00:12:40.325 11:00:08 -- setup/common.sh@19 -- # local var val 00:12:40.325 11:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:12:40.325 11:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:40.325 11:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:40.325 11:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:40.325 11:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:12:40.325 11:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6107188 kB' 'MemAvailable: 9474356 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 892204 kB' 'Inactive: 2799212 kB' 'Active(anon): 132960 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 888 kB' 'Writeback: 0 kB' 'AnonPages: 124212 kB' 'Mapped: 49128 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177280 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85980 kB' 'KernelStack: 6592 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 
00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.325 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.325 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- 
setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 
00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.326 11:00:08 -- setup/common.sh@33 -- # echo 0 00:12:40.326 11:00:08 -- setup/common.sh@33 -- # return 0 00:12:40.326 11:00:08 -- setup/hugepages.sh@99 -- # surp=0 00:12:40.326 11:00:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:40.326 11:00:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:40.326 11:00:08 -- setup/common.sh@18 -- # local node= 00:12:40.326 11:00:08 -- setup/common.sh@19 -- # local var val 00:12:40.326 11:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:12:40.326 11:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:40.326 11:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:40.326 11:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:40.326 11:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:12:40.326 11:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6107192 kB' 'MemAvailable: 9474360 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 892436 kB' 'Inactive: 2799212 kB' 
'Active(anon): 133192 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 888 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 49128 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177280 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85980 kB' 'KernelStack: 6628 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.326 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.326 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 
11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.327 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.327 11:00:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 
11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:40.328 11:00:08 -- setup/common.sh@33 -- # echo 0 00:12:40.328 11:00:08 -- setup/common.sh@33 -- # return 0 00:12:40.328 11:00:08 -- setup/hugepages.sh@100 -- # resv=0 00:12:40.328 nr_hugepages=1024 00:12:40.328 11:00:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:40.328 resv_hugepages=0 00:12:40.328 11:00:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:40.328 surplus_hugepages=0 00:12:40.328 11:00:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:40.328 anon_hugepages=0 00:12:40.328 11:00:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:40.328 11:00:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:40.328 11:00:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:40.328 11:00:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:40.328 11:00:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:40.328 11:00:08 -- setup/common.sh@18 -- # local node= 00:12:40.328 11:00:08 -- setup/common.sh@19 -- # local var val 00:12:40.328 11:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:12:40.328 11:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:40.328 11:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:40.328 11:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:40.328 11:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:12:40.328 11:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6107192 kB' 'MemAvailable: 9474360 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 892120 kB' 'Inactive: 2799212 kB' 'Active(anon): 132876 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 888 kB' 'Writeback: 0 kB' 'AnonPages: 124080 kB' 'Mapped: 49012 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177244 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85944 kB' 'KernelStack: 6576 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 
00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.328 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.328 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 
11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.590 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.590 11:00:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:40.590 
11:00:08 -- setup/common.sh@33 -- # echo 1024 00:12:40.590 11:00:08 -- setup/common.sh@33 -- # return 0 00:12:40.590 11:00:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:40.590 11:00:08 -- setup/hugepages.sh@112 -- # get_nodes 00:12:40.590 11:00:08 -- setup/hugepages.sh@27 -- # local node 00:12:40.590 11:00:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:40.590 11:00:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:40.590 11:00:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:40.591 11:00:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:40.591 11:00:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:40.591 11:00:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:40.591 11:00:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:40.591 11:00:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:40.591 11:00:08 -- setup/common.sh@18 -- # local node=0 00:12:40.591 11:00:08 -- setup/common.sh@19 -- # local var val 00:12:40.591 11:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:12:40.591 11:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:40.591 11:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:40.591 11:00:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:40.591 11:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:12:40.591 11:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6107580 kB' 'MemUsed: 6134392 kB' 'SwapCached: 0 kB' 'Active: 892020 kB' 'Inactive: 2799212 kB' 'Active(anon): 132776 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 888 kB' 'Writeback: 0 kB' 'FilePages: 3568932 kB' 'Mapped: 49012 kB' 'AnonPages: 123980 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 177244 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 
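
(Annotation, not part of the captured log.) The same lookup is then repeated per NUMA node: get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node <N>" prefix with "${mem[@]#Node +([0-9]) }" before running the identical field-matching loop. A minimal sketch of a per-node read under the same assumptions (node_hugepages is a hypothetical name, not an SPDK helper):

    node_hugepages() {
        # Print a node's free 2 MiB hugepages from the per-node meminfo file.
        local node=$1 f=/sys/devices/system/node/node$1/meminfo
        [[ -e $f ]] || return 1
        # Per-node lines look like: "Node 0 HugePages_Free:  1024"
        awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Free:" {print $4}' "$f"
    }
    # Example: node_hugepages 0
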
00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.591 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.591 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.592 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.592 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.592 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.592 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.592 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.592 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.592 11:00:09 -- setup/common.sh@32 -- # continue 00:12:40.592 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:40.592 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:40.592 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:40.592 11:00:09 -- setup/common.sh@33 -- # echo 0 00:12:40.592 11:00:09 -- setup/common.sh@33 -- # return 0 00:12:40.592 11:00:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:40.592 11:00:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:40.592 11:00:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:40.592 11:00:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:40.592 node0=1024 expecting 1024 00:12:40.592 11:00:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:40.592 11:00:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:40.592 00:12:40.592 real 0m0.951s 00:12:40.592 user 0m0.452s 00:12:40.592 sys 0m0.471s 00:12:40.592 11:00:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:40.592 11:00:09 -- common/autotest_common.sh@10 -- # set +x 00:12:40.592 ************************************ 00:12:40.592 END TEST default_setup 00:12:40.592 ************************************ 00:12:40.592 11:00:09 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:12:40.592 11:00:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:40.592 11:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.592 11:00:09 -- common/autotest_common.sh@10 -- # set +x 00:12:40.592 ************************************ 00:12:40.592 START TEST 
per_node_1G_alloc 00:12:40.592 ************************************ 00:12:40.592 11:00:09 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:12:40.592 11:00:09 -- setup/hugepages.sh@143 -- # local IFS=, 00:12:40.592 11:00:09 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:12:40.592 11:00:09 -- setup/hugepages.sh@49 -- # local size=1048576 00:12:40.592 11:00:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:40.592 11:00:09 -- setup/hugepages.sh@51 -- # shift 00:12:40.592 11:00:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:40.592 11:00:09 -- setup/hugepages.sh@52 -- # local node_ids 00:12:40.592 11:00:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:40.592 11:00:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:40.592 11:00:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:40.592 11:00:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:40.592 11:00:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:40.592 11:00:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:40.592 11:00:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:40.592 11:00:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:40.592 11:00:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:40.592 11:00:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:40.592 11:00:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:40.592 11:00:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:12:40.592 11:00:09 -- setup/hugepages.sh@73 -- # return 0 00:12:40.592 11:00:09 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:12:40.592 11:00:09 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:12:40.592 11:00:09 -- setup/hugepages.sh@146 -- # setup output 00:12:40.592 11:00:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:40.592 11:00:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:40.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.850 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:40.850 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:41.109 11:00:09 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:12:41.109 11:00:09 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:12:41.109 11:00:09 -- setup/hugepages.sh@89 -- # local node 00:12:41.109 11:00:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:41.109 11:00:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:41.109 11:00:09 -- setup/hugepages.sh@92 -- # local surp 00:12:41.109 11:00:09 -- setup/hugepages.sh@93 -- # local resv 00:12:41.109 11:00:09 -- setup/hugepages.sh@94 -- # local anon 00:12:41.109 11:00:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:41.109 11:00:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:41.109 11:00:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:41.109 11:00:09 -- setup/common.sh@18 -- # local node= 00:12:41.109 11:00:09 -- setup/common.sh@19 -- # local var val 00:12:41.109 11:00:09 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.109 11:00:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.109 11:00:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.109 11:00:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.109 11:00:09 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.109 11:00:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:12:41.109 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7151420 kB' 'MemAvailable: 10518596 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 892420 kB' 'Inactive: 2799220 kB' 'Active(anon): 133176 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1044 kB' 'Writeback: 0 kB' 'AnonPages: 124304 kB' 'Mapped: 49148 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177236 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85936 kB' 'KernelStack: 6568 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 
00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 
00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.110 11:00:09 -- setup/common.sh@33 -- # echo 0 00:12:41.110 11:00:09 -- setup/common.sh@33 -- # return 0 00:12:41.110 11:00:09 -- setup/hugepages.sh@97 -- # anon=0 00:12:41.110 11:00:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:41.110 11:00:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:41.110 11:00:09 -- setup/common.sh@18 -- # local node= 00:12:41.110 11:00:09 -- setup/common.sh@19 -- # local var val 00:12:41.110 11:00:09 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.110 11:00:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.110 11:00:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.110 11:00:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.110 11:00:09 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.110 11:00:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7151420 kB' 'MemAvailable: 10518596 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 892004 kB' 'Inactive: 2799220 kB' 'Active(anon): 132760 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1044 kB' 
'Writeback: 0 kB' 'AnonPages: 123828 kB' 'Mapped: 49024 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177236 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85936 kB' 'KernelStack: 6592 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.110 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.110 11:00:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var 
val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 
00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.111 11:00:09 -- setup/common.sh@33 -- # echo 0 00:12:41.111 11:00:09 -- setup/common.sh@33 -- # return 0 00:12:41.111 11:00:09 -- setup/hugepages.sh@99 -- # surp=0 00:12:41.111 11:00:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:41.111 11:00:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:41.111 11:00:09 -- setup/common.sh@18 -- # local node= 00:12:41.111 11:00:09 -- setup/common.sh@19 -- # local var val 00:12:41.111 11:00:09 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.111 11:00:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.111 11:00:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.111 11:00:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.111 11:00:09 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.111 11:00:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7151420 kB' 'MemAvailable: 10518596 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 891964 kB' 'Inactive: 2799220 kB' 'Active(anon): 132720 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1044 kB' 'Writeback: 0 kB' 'AnonPages: 124080 kB' 'Mapped: 49024 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177236 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85936 kB' 'KernelStack: 6592 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- 
# continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- 
# read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # 
continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.111 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.111 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.112 11:00:09 -- setup/common.sh@33 -- # echo 0 00:12:41.112 11:00:09 -- setup/common.sh@33 -- # return 0 00:12:41.112 11:00:09 -- setup/hugepages.sh@100 -- # resv=0 00:12:41.112 
nr_hugepages=512 00:12:41.112 11:00:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:41.112 resv_hugepages=0 00:12:41.112 11:00:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:41.112 surplus_hugepages=0 00:12:41.112 11:00:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:41.112 anon_hugepages=0 00:12:41.112 11:00:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:41.112 11:00:09 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:41.112 11:00:09 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:41.112 11:00:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:41.112 11:00:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:41.112 11:00:09 -- setup/common.sh@18 -- # local node= 00:12:41.112 11:00:09 -- setup/common.sh@19 -- # local var val 00:12:41.112 11:00:09 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.112 11:00:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.112 11:00:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.112 11:00:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.112 11:00:09 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.112 11:00:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7154932 kB' 'MemAvailable: 10522108 kB' 'Buffers: 3456 kB' 'Cached: 3565476 kB' 'SwapCached: 0 kB' 'Active: 892276 kB' 'Inactive: 2799220 kB' 'Active(anon): 133032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1044 kB' 'Writeback: 0 kB' 'AnonPages: 124144 kB' 'Mapped: 49024 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177232 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85932 kB' 'KernelStack: 6592 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 
-- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 
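[editor note] The trace above is the setup/common.sh get_meminfo helper scanning a meminfo file one "key: value" pair at a time (IFS=': ' read -r var val _) until it reaches the requested counter. A minimal stand-alone sketch of that parsing pattern -- illustrative only, not the repo's implementation; the helper name and node argument are assumptions -- could look like:

    shopt -s extglob
    get_counter() {   # get_counter <MeminfoKey> [<node-index>]  -- illustrative helper
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node statistics live under /sys when a node index is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }          # per-node lines carry a "Node <n> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    get_counter HugePages_Total    # prints 512 on the host used in this run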
00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.112 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.112 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.112 11:00:09 -- setup/common.sh@33 -- # echo 512 00:12:41.112 11:00:09 -- setup/common.sh@33 -- # return 0 00:12:41.112 11:00:09 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:41.112 11:00:09 -- setup/hugepages.sh@112 -- # get_nodes 00:12:41.112 11:00:09 -- setup/hugepages.sh@27 -- # local node 00:12:41.112 11:00:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:41.112 11:00:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:41.112 11:00:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:41.112 11:00:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:41.112 11:00:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:41.112 11:00:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:41.112 11:00:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:41.112 11:00:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:41.112 11:00:09 -- setup/common.sh@18 -- # local node=0 00:12:41.112 11:00:09 -- setup/common.sh@19 -- # local var val 00:12:41.112 11:00:09 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.112 11:00:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.112 11:00:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 
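[editor note] Once the counter comes back (512 here), hugepages.sh checks that the kernel-wide total equals the requested count plus any surplus and reserved pages before comparing per-node totals. A self-contained sketch of that bookkeeping, with illustrative variable names rather than the script's own, might be:

    nr_hugepages=512        # the count this test requested
    read -r total rsvd surp < <(awk '
        /^HugePages_Total:/ {t=$2} /^HugePages_Rsvd:/ {r=$2} /^HugePages_Surp:/ {s=$2}
        END {print t, r, s}' /proc/meminfo)
    if (( total == nr_hugepages + surp + rsvd )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$rsvd surplus_hugepages=$surp"
    fi
    # With a single NUMA node, node0 is then expected to hold all 512 pages
    # ("node0=512 expecting 512" in the test output further below).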
00:12:41.112 11:00:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:41.113 11:00:09 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.113 11:00:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7154932 kB' 'MemUsed: 5087040 kB' 'SwapCached: 0 kB' 'Active: 891980 kB' 'Inactive: 2799220 kB' 'Active(anon): 132736 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1044 kB' 'Writeback: 0 kB' 'FilePages: 3568932 kB' 'Mapped: 49024 kB' 'AnonPages: 123840 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 177232 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 
-- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- 
# continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # continue 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.113 11:00:09 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.113 11:00:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.113 11:00:09 -- setup/common.sh@33 -- # echo 0 00:12:41.113 11:00:09 -- setup/common.sh@33 -- # return 0 00:12:41.113 11:00:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:41.113 11:00:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:41.113 11:00:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:41.113 11:00:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:41.113 node0=512 expecting 512 00:12:41.113 11:00:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:41.113 11:00:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:41.113 00:12:41.113 real 0m0.539s 00:12:41.113 user 0m0.296s 00:12:41.113 sys 0m0.279s 00:12:41.113 11:00:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.113 11:00:09 -- common/autotest_common.sh@10 -- # set +x 00:12:41.113 ************************************ 00:12:41.113 END TEST per_node_1G_alloc 00:12:41.113 ************************************ 00:12:41.113 11:00:09 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:12:41.113 11:00:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:41.113 11:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.113 11:00:09 -- common/autotest_common.sh@10 -- # set +x 00:12:41.371 ************************************ 00:12:41.371 START TEST even_2G_alloc 00:12:41.371 ************************************ 00:12:41.371 11:00:09 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:12:41.371 11:00:09 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:12:41.371 11:00:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:12:41.371 11:00:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:41.371 11:00:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:41.371 11:00:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:41.371 11:00:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:41.371 11:00:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:41.371 11:00:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:41.371 11:00:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:41.371 11:00:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:41.371 11:00:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:41.371 11:00:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:41.371 11:00:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:41.371 11:00:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:41.371 11:00:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:41.371 11:00:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:12:41.371 11:00:09 -- setup/hugepages.sh@83 -- # : 
0 00:12:41.371 11:00:09 -- setup/hugepages.sh@84 -- # : 0 00:12:41.371 11:00:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:41.371 11:00:09 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:12:41.371 11:00:09 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:12:41.371 11:00:09 -- setup/hugepages.sh@153 -- # setup output 00:12:41.371 11:00:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:41.371 11:00:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:41.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:41.635 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:41.636 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:41.636 11:00:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:12:41.636 11:00:10 -- setup/hugepages.sh@89 -- # local node 00:12:41.636 11:00:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:41.636 11:00:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:41.636 11:00:10 -- setup/hugepages.sh@92 -- # local surp 00:12:41.636 11:00:10 -- setup/hugepages.sh@93 -- # local resv 00:12:41.636 11:00:10 -- setup/hugepages.sh@94 -- # local anon 00:12:41.636 11:00:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:41.636 11:00:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:41.636 11:00:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:41.636 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:41.636 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:41.636 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.636 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.636 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.636 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.636 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.636 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6108064 kB' 'MemAvailable: 9475244 kB' 'Buffers: 3456 kB' 'Cached: 3565480 kB' 'SwapCached: 0 kB' 'Active: 892964 kB' 'Inactive: 2799224 kB' 'Active(anon): 133720 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1200 kB' 'Writeback: 0 kB' 'AnonPages: 124564 kB' 'Mapped: 49136 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177264 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85964 kB' 'KernelStack: 6548 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ 
MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 
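[editor note] The AnonHugePages read traced here only happens because transparent hugepages are not set to [never] on this host (the `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` check in hugepages.sh@96 above), so verify_nr_hugepages also accounts for THP-backed memory. A small illustrative probe of the same two inputs -- standard kernel paths, not part of the traced scripts -- would be:

    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "THP mode: $thp_mode; AnonHugePages: ${anon_kb} kB"   # 0 kB in this run
    fi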
00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- 
setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.636 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:41.636 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:41.636 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:41.636 11:00:10 -- setup/hugepages.sh@97 -- # anon=0 00:12:41.636 11:00:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:41.636 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:41.636 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:41.636 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:41.636 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.636 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.636 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.636 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.636 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.636 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.636 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6108708 kB' 'MemAvailable: 9475888 kB' 'Buffers: 3456 kB' 'Cached: 3565480 kB' 'SwapCached: 0 kB' 'Active: 892336 kB' 'Inactive: 2799224 kB' 'Active(anon): 133092 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1200 kB' 'Writeback: 0 kB' 'AnonPages: 123936 kB' 'Mapped: 49136 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177264 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85964 kB' 'KernelStack: 6516 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- 
setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 
11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 
11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.637 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.637 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.638 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:41.638 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:41.638 11:00:10 -- setup/hugepages.sh@99 -- # surp=0 00:12:41.638 11:00:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:41.638 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:41.638 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:41.638 11:00:10 -- setup/common.sh@19 -- # local var val 
00:12:41.638 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.638 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.638 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.638 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.638 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.638 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6108708 kB' 'MemAvailable: 9475888 kB' 'Buffers: 3456 kB' 'Cached: 3565480 kB' 'SwapCached: 0 kB' 'Active: 892040 kB' 'Inactive: 2799224 kB' 'Active(anon): 132796 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1200 kB' 'Writeback: 0 kB' 'AnonPages: 123896 kB' 'Mapped: 49136 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177264 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85964 kB' 'KernelStack: 6532 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 
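The printf just above is the helper feeding its mapfile'd snapshot of /proc/meminfo back through the IFS=': ' read loop; every field is tested against the requested key (HugePages_Rsvd at this point) and skipped with continue until it matches. A stand-alone sketch of the same lookup, assuming a Linux /proc and sysfs layout — a simplified rendition of the pattern in the trace, not the repo's exact get_meminfo:

    #!/usr/bin/env bash
    shopt -s extglob

    # Snapshot a meminfo file, strip the "Node N" prefix that per-node files
    # carry, then walk the fields until the requested key matches.
    meminfo_value() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem var val _

        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Free:" -> "HugePages_Free:"

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    meminfo_value HugePages_Surp      # system-wide surplus pages, 0 in this run
    meminfo_value HugePages_Free 0    # per-node variant, 1024 on node0 here

Each miss in the real helper shows up in the log as one [[ field == ... ]] / continue pair, which is why the walk above and below runs so long.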
00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.638 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.638 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 
00:12:41.638 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:41.639 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:41.639 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:41.639 11:00:10 -- setup/hugepages.sh@100 -- # resv=0 00:12:41.639 nr_hugepages=1024 00:12:41.639 11:00:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:41.639 resv_hugepages=0 00:12:41.639 11:00:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:41.639 surplus_hugepages=0 00:12:41.639 11:00:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:41.639 anon_hugepages=0 00:12:41.639 11:00:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:41.639 11:00:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:41.639 11:00:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:41.639 11:00:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:41.639 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:41.639 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:41.639 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:41.639 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.639 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.639 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:41.639 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:41.639 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.639 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241972 kB' 'MemFree: 6108708 kB' 'MemAvailable: 9475888 kB' 'Buffers: 3456 kB' 'Cached: 3565480 kB' 'SwapCached: 0 kB' 'Active: 892280 kB' 'Inactive: 2799224 kB' 'Active(anon): 133036 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1200 kB' 'Writeback: 0 kB' 'AnonPages: 124136 kB' 'Mapped: 49136 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177264 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85964 kB' 'KernelStack: 6532 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 
11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.639 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.639 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
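While the HugePages_Total walk here runs, the snapshot it is scanning (printed a few lines up) is internally consistent: the pool is 1024 pages of 2048 kB each, which is exactly the Hugetlb figure in the same snapshot.

    echo $(( 1024 * 2048 ))   # 2097152 kB, matching 'Hugetlb: 2097152 kB' above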
00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 
11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:41.640 11:00:10 -- setup/common.sh@33 -- # echo 1024 00:12:41.640 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:41.640 11:00:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:41.640 11:00:10 -- setup/hugepages.sh@112 -- # get_nodes 00:12:41.640 11:00:10 -- setup/hugepages.sh@27 -- # local node 00:12:41.640 11:00:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:41.640 11:00:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:41.640 11:00:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:41.640 11:00:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:41.640 11:00:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:41.640 11:00:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:41.640 11:00:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:41.640 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:41.640 11:00:10 -- setup/common.sh@18 -- # local node=0 00:12:41.640 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:41.640 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:41.640 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:41.640 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:41.640 11:00:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:41.640 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:41.640 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6108884 kB' 'MemUsed: 6133088 kB' 'SwapCached: 0 kB' 'Active: 892168 kB' 'Inactive: 2799224 kB' 'Active(anon): 132924 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1200 kB' 'Writeback: 0 kB' 'FilePages: 3568936 kB' 'Mapped: 49032 kB' 'AnonPages: 124024 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 177260 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 
11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.640 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.640 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # continue 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:41.641 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:41.641 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:41.641 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:41.641 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:41.641 11:00:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:41.641 11:00:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:41.641 11:00:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:41.641 11:00:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:41.641 node0=1024 expecting 1024 00:12:41.641 11:00:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:41.641 11:00:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:41.641 00:12:41.641 real 
0m0.497s 00:12:41.641 user 0m0.251s 00:12:41.641 sys 0m0.281s 00:12:41.641 11:00:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.641 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:12:41.641 ************************************ 00:12:41.641 END TEST even_2G_alloc 00:12:41.641 ************************************ 00:12:41.898 11:00:10 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:12:41.898 11:00:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:41.898 11:00:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.898 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:12:41.898 ************************************ 00:12:41.898 START TEST odd_alloc 00:12:41.898 ************************************ 00:12:41.898 11:00:10 -- common/autotest_common.sh@1111 -- # odd_alloc 00:12:41.898 11:00:10 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:12:41.898 11:00:10 -- setup/hugepages.sh@49 -- # local size=2098176 00:12:41.898 11:00:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:41.898 11:00:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:41.898 11:00:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:12:41.898 11:00:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:41.898 11:00:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:41.898 11:00:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:41.898 11:00:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:12:41.898 11:00:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:41.898 11:00:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:41.898 11:00:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:41.898 11:00:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:41.898 11:00:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:41.898 11:00:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:41.898 11:00:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:12:41.898 11:00:10 -- setup/hugepages.sh@83 -- # : 0 00:12:41.898 11:00:10 -- setup/hugepages.sh@84 -- # : 0 00:12:41.898 11:00:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:41.898 11:00:10 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:12:41.898 11:00:10 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:12:41.898 11:00:10 -- setup/hugepages.sh@160 -- # setup output 00:12:41.898 11:00:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:41.898 11:00:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:42.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:42.157 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:42.157 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:42.157 11:00:10 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:12:42.157 11:00:10 -- setup/hugepages.sh@89 -- # local node 00:12:42.157 11:00:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:42.157 11:00:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:42.157 11:00:10 -- setup/hugepages.sh@92 -- # local surp 00:12:42.157 11:00:10 -- setup/hugepages.sh@93 -- # local resv 00:12:42.157 11:00:10 -- setup/hugepages.sh@94 -- # local anon 00:12:42.157 11:00:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:42.157 11:00:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:42.157 11:00:10 -- setup/common.sh@17 -- # local get=AnonHugePages 
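The trace above shows get_test_nr_hugepages turning the requested 2098176 kB (HUGEMEM=2049 MB) into nr_hugepages=1025. A minimal sketch of that arithmetic, assuming the helper simply divides the requested size by the 2048 kB default hugepage size and rounds up (the exact rounding logic in hugepages.sh is not shown in this excerpt; the variable names below are illustrative only):

    # Hypothetical re-derivation of the value seen in the log above.
    size_kb=2098176          # HUGEMEM=2049 MB expressed in kB (2049 * 1024)
    hugepage_kb=2048         # Hugepagesize reported in /proc/meminfo
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "$nr_hugepages"     # -> 1025, matching nr_hugepages=1025 in the trace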
00:12:42.157 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:42.157 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:42.157 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:42.157 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:42.157 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:42.157 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:42.157 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:42.157 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6111076 kB' 'MemAvailable: 9478260 kB' 'Buffers: 3456 kB' 'Cached: 3565484 kB' 'SwapCached: 0 kB' 'Active: 892632 kB' 'Inactive: 2799228 kB' 'Active(anon): 133388 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'AnonPages: 124492 kB' 'Mapped: 49168 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177284 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85984 kB' 'KernelStack: 6532 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 
11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.157 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.157 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.158 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.158 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.420 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.420 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.420 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.420 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.420 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.420 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.420 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:42.420 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:42.420 11:00:10 -- setup/hugepages.sh@97 -- # anon=0 00:12:42.420 11:00:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:42.420 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:42.420 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:42.420 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:42.420 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:42.420 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:42.420 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:42.420 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:42.420 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:42.420 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:42.420 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.420 
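Each of the long runs of "[[ <field> == ... ]] / continue" in this trace is setup/common.sh's get_meminfo walking every /proc/meminfo line until it reaches the requested key (AnonHugePages above, HugePages_Surp next). A minimal standalone sketch of that pattern, assuming a simplified reader that skips the per-node handling visible in the trace (function name and structure here are illustrative, not the script's actual implementation):

    # Illustrative only: return the numeric value for one /proc/meminfo field.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value HugePages_Surp   # prints 0 on the system traced above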
11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6111484 kB' 'MemAvailable: 9478668 kB' 'Buffers: 3456 kB' 'Cached: 3565484 kB' 'SwapCached: 0 kB' 'Active: 892296 kB' 'Inactive: 2799228 kB' 'Active(anon): 133052 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'AnonPages: 124164 kB' 'Mapped: 49040 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177276 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85976 kB' 'KernelStack: 6592 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 
11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- 
setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.421 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.421 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.422 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:42.422 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:42.422 11:00:10 -- setup/hugepages.sh@99 -- # surp=0 00:12:42.422 11:00:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:42.422 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:42.422 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:42.422 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:42.422 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:42.422 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:42.422 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:42.422 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:42.422 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:42.422 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6111484 kB' 'MemAvailable: 9478668 kB' 'Buffers: 3456 kB' 'Cached: 3565484 kB' 'SwapCached: 0 kB' 'Active: 892008 kB' 'Inactive: 2799228 kB' 'Active(anon): 132764 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'AnonPages: 123904 kB' 'Mapped: 49040 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177276 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85976 kB' 'KernelStack: 6592 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.422 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.422 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var 
val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 
11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.423 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.423 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:42.423 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:42.423 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:42.423 11:00:10 -- setup/hugepages.sh@100 -- # resv=0 00:12:42.423 nr_hugepages=1025 00:12:42.423 11:00:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:12:42.423 resv_hugepages=0 00:12:42.423 11:00:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:42.423 surplus_hugepages=0 00:12:42.423 11:00:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:42.423 anon_hugepages=0 00:12:42.423 11:00:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:42.424 11:00:10 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:42.424 11:00:10 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:12:42.424 11:00:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:42.424 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:42.424 11:00:10 -- setup/common.sh@18 -- # local node= 00:12:42.424 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:42.424 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:42.424 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:42.424 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:42.424 11:00:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:42.424 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:42.424 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6112004 kB' 'MemAvailable: 9479188 kB' 'Buffers: 3456 kB' 'Cached: 3565484 kB' 'SwapCached: 0 kB' 'Active: 892268 kB' 'Inactive: 2799228 kB' 'Active(anon): 133024 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'AnonPages: 124164 kB' 'Mapped: 49040 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177276 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85976 kB' 'KernelStack: 6592 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 
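With anon, surp and resv collected, the hugepages.sh@107-110 lines above compare the kernel-reported totals against the requested count before echoing nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. A condensed, hypothetical form of that final comparison, assuming the counters shown in this trace (the script's actual variable handling may differ):

    # Hedged sketch of the verification step traced above.
    nr_hugepages=1025   # requested by the odd_alloc test
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1025          # HugePages_Total from /proc/meminfo
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages verified"
    fi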
00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 
00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.424 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.424 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 
-- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:42.425 11:00:10 -- setup/common.sh@33 -- # echo 1025 00:12:42.425 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:42.425 11:00:10 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:42.425 11:00:10 -- setup/hugepages.sh@112 -- # get_nodes 00:12:42.425 11:00:10 -- setup/hugepages.sh@27 -- # local node 00:12:42.425 11:00:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:42.425 11:00:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:12:42.425 11:00:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:42.425 11:00:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:42.425 11:00:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:42.425 11:00:10 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:12:42.425 11:00:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:42.425 11:00:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:42.425 11:00:10 -- setup/common.sh@18 -- # local node=0 00:12:42.425 11:00:10 -- setup/common.sh@19 -- # local var val 00:12:42.425 11:00:10 -- setup/common.sh@20 -- # local mem_f mem 00:12:42.425 11:00:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:42.425 11:00:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:42.425 11:00:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:42.425 11:00:10 -- setup/common.sh@28 -- # mapfile -t mem 00:12:42.425 11:00:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6112688 kB' 'MemUsed: 6129284 kB' 'SwapCached: 0 kB' 'Active: 892152 kB' 'Inactive: 2799228 kB' 'Active(anon): 132908 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'FilePages: 3568940 kB' 'Mapped: 49040 kB' 'AnonPages: 124008 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 177276 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.425 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.425 11:00:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 
00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 
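The scan above is the same lookup run with node=0: before parsing, setup/common.sh checks whether /sys/devices/system/node/node0/meminfo exists and, if so, reads it instead of /proc/meminfo, stripping the leading 'Node 0 ' from each line so the same 'key: value' parser applies. A minimal illustration of that source selection (the prefix strip is simplified to a literal here; the script itself uses an extglob pattern):

    node=0
    mem_f=/proc/meminfo
    # prefer the per-node view when the sysfs file is present, as the trace does
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node $node }")    # per-node lines start with "Node 0 ", drop that prefix
    printf '%s\n' "${mem[@]}" | grep -m1 '^HugePages_Surp:'   # -> "HugePages_Surp: 0" on this runner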
00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # continue 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.426 11:00:10 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.426 11:00:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:42.426 11:00:10 -- setup/common.sh@33 -- # echo 0 00:12:42.426 11:00:10 -- setup/common.sh@33 -- # return 0 00:12:42.426 11:00:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:42.426 11:00:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:42.426 11:00:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:42.426 11:00:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:42.426 node0=1025 expecting 1025 00:12:42.426 11:00:10 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:12:42.426 11:00:10 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:12:42.426 00:12:42.426 real 0m0.521s 00:12:42.426 user 0m0.215s 00:12:42.426 sys 0m0.311s 00:12:42.426 11:00:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:42.426 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:12:42.426 ************************************ 00:12:42.426 END TEST odd_alloc 00:12:42.426 ************************************ 00:12:42.426 11:00:10 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:12:42.426 11:00:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:42.426 11:00:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.426 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:12:42.426 ************************************ 00:12:42.426 START TEST custom_alloc 00:12:42.426 ************************************ 00:12:42.426 11:00:11 -- common/autotest_common.sh@1111 -- # custom_alloc 00:12:42.426 11:00:11 -- setup/hugepages.sh@167 -- # local IFS=, 00:12:42.426 11:00:11 -- setup/hugepages.sh@169 -- # local node 00:12:42.426 11:00:11 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:12:42.426 11:00:11 -- setup/hugepages.sh@170 -- # local nodes_hp 00:12:42.426 11:00:11 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:12:42.426 11:00:11 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:12:42.426 11:00:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:12:42.427 11:00:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:12:42.427 11:00:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:42.427 11:00:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:42.427 11:00:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:42.427 11:00:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:42.427 11:00:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:42.427 11:00:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:42.427 11:00:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:42.427 11:00:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:42.427 11:00:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:12:42.427 11:00:11 -- setup/hugepages.sh@83 -- # : 0 00:12:42.427 11:00:11 -- setup/hugepages.sh@84 -- # : 0 00:12:42.427 11:00:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:12:42.427 11:00:11 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:12:42.427 11:00:11 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:12:42.427 11:00:11 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:12:42.427 11:00:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:42.427 11:00:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:42.427 11:00:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:42.427 11:00:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:42.427 11:00:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:42.427 11:00:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:42.427 11:00:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:12:42.427 11:00:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:12:42.427 11:00:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:12:42.427 11:00:11 -- setup/hugepages.sh@78 -- # return 0 00:12:42.427 11:00:11 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:12:42.427 11:00:11 -- setup/hugepages.sh@187 -- # setup output 00:12:42.427 11:00:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:42.427 11:00:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:42.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:42.998 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:42.998 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:42.998 11:00:11 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:12:42.998 11:00:11 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:12:42.998 11:00:11 -- setup/hugepages.sh@89 -- # local node 00:12:42.998 11:00:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:42.998 11:00:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:42.998 11:00:11 -- setup/hugepages.sh@92 -- # local surp 00:12:42.998 11:00:11 -- setup/hugepages.sh@93 -- # local resv 00:12:42.998 11:00:11 -- setup/hugepages.sh@94 -- # local anon 00:12:42.998 11:00:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:42.998 11:00:11 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:42.998 11:00:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:42.998 11:00:11 -- setup/common.sh@18 -- # local node= 00:12:42.998 11:00:11 -- setup/common.sh@19 -- # local var val 00:12:42.998 11:00:11 -- setup/common.sh@20 -- # local mem_f mem 00:12:42.999 11:00:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:42.999 11:00:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:42.999 11:00:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:42.999 11:00:11 -- setup/common.sh@28 -- # mapfile -t mem 00:12:42.999 11:00:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7163212 kB' 'MemAvailable: 10530404 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 892456 kB' 'Inactive: 2799236 kB' 'Active(anon): 133212 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124280 kB' 'Mapped: 49148 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177280 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85980 kB' 'KernelStack: 6576 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 
-- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # continue 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:42.999 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:42.999 11:00:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.000 11:00:11 -- setup/common.sh@33 -- # echo 0 00:12:43.000 11:00:11 -- setup/common.sh@33 -- # return 0 00:12:43.000 11:00:11 -- setup/hugepages.sh@97 -- # anon=0 00:12:43.000 11:00:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:43.000 11:00:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:43.000 11:00:11 -- setup/common.sh@18 -- # local node= 00:12:43.000 11:00:11 -- setup/common.sh@19 -- # local var val 00:12:43.000 11:00:11 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.000 11:00:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.000 11:00:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:43.000 11:00:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:43.000 11:00:11 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.000 11:00:11 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7163212 kB' 'MemAvailable: 10530404 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 892156 kB' 'Inactive: 2799236 kB' 'Active(anon): 132912 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124296 kB' 'Mapped: 49148 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177280 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85980 kB' 'KernelStack: 6560 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 
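The field walk in progress here is verify_nr_hugepages reading the system-wide HugePages_Surp after setup was re-run with HUGENODE='nodes_hp[0]=512' (visible earlier in this trace). As in the odd_alloc pass, the check that follows is an accounting identity: the allocated total must equal the requested pages plus surplus plus reserved, and each node must report the expected count (node0=1025 expecting 1025 above; 512 is the target for this pass). A compact restatement of those checks, using awk for brevity instead of the script's read loop; the 512 literal is this run's request:

    req=512
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    # per-node lines read "Node 0 HugePages_Total: <n>", hence fields 3 and 4
    node0=$(awk '$3 == "HugePages_Total:" {print $4}' /sys/devices/system/node/node0/meminfo)
    (( total == req + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( node0 == req ))               || echo "node0 did not get the requested pages" >&2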
00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.000 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.000 11:00:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.001 11:00:11 -- setup/common.sh@33 -- # echo 0 00:12:43.001 11:00:11 -- setup/common.sh@33 -- # return 0 00:12:43.001 11:00:11 -- setup/hugepages.sh@99 -- # surp=0 00:12:43.001 11:00:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:43.001 11:00:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:43.001 11:00:11 -- setup/common.sh@18 -- # local node= 00:12:43.001 11:00:11 -- setup/common.sh@19 -- # local var val 00:12:43.001 11:00:11 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.001 11:00:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.001 11:00:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:43.001 11:00:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:43.001 11:00:11 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.001 11:00:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7163212 kB' 'MemAvailable: 10530404 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 891972 kB' 'Inactive: 2799236 kB' 'Active(anon): 132728 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124088 kB' 'Mapped: 49048 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177276 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85976 kB' 'KernelStack: 6576 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 
'Committed_AS: 356300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.001 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.001 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 
11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # 
continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.002 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.002 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- 
# IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.003 11:00:11 -- setup/common.sh@33 -- # echo 0 00:12:43.003 11:00:11 -- setup/common.sh@33 -- # return 0 00:12:43.003 11:00:11 -- setup/hugepages.sh@100 -- # resv=0 00:12:43.003 nr_hugepages=512 00:12:43.003 11:00:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:43.003 resv_hugepages=0 00:12:43.003 11:00:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:43.003 surplus_hugepages=0 00:12:43.003 11:00:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:43.003 anon_hugepages=0 00:12:43.003 11:00:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:43.003 11:00:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:43.003 11:00:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:43.003 11:00:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:43.003 11:00:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:43.003 11:00:11 -- setup/common.sh@18 -- # local node= 00:12:43.003 11:00:11 -- setup/common.sh@19 -- # local var val 00:12:43.003 11:00:11 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.003 11:00:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.003 11:00:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:43.003 11:00:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:43.003 11:00:11 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.003 11:00:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7163212 kB' 'MemAvailable: 10530404 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 892228 kB' 'Inactive: 2799236 kB' 'Active(anon): 132984 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 49048 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177276 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85976 kB' 'KernelStack: 6576 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 
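The trace above repeats one field-lookup pattern for every /proc/meminfo key until the requested counter (here HugePages_Surp, then HugePages_Rsvd) is found. A minimal sketch of that pattern, using a hypothetical helper name rather than the script's own get_meminfo (which additionally takes a node argument and strips the "Node N" prefix), assuming a standard /proc/meminfo layout:

    # Hypothetical helper: return the numeric value of one /proc/meminfo field.
    meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the field name, val the first value token (number)
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    surp=$(meminfo_value HugePages_Surp)   # expected 0 in this run
    resv=$(meminfo_value HugePages_Rsvd)   # expected 0 in this run
    echo "surplus_hugepages=$surp resv_hugepages=$resv"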
00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.003 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.003 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': 
' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.004 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.004 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.004 11:00:11 -- setup/common.sh@33 -- # echo 512 00:12:43.004 11:00:11 -- setup/common.sh@33 -- # return 0 00:12:43.004 11:00:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:43.004 11:00:11 -- setup/hugepages.sh@112 -- # get_nodes 00:12:43.004 11:00:11 -- setup/hugepages.sh@27 -- # local node 00:12:43.004 11:00:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:43.004 11:00:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:43.004 11:00:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:43.004 11:00:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:43.004 
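The get_nodes step just traced enumerates NUMA nodes in sysfs before the per-node counters are read from node0/meminfo below. A rough sketch of that per-node pass, with illustrative variable names and no claim to match the script's internals:

    # Enumerate /sys/devices/system/node/node<N> and read each node's hugepage total.
    shopt -s extglob
    declare -A node_pages
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # per-node meminfo lines look like "Node 0 HugePages_Total:   512"
        node_pages[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    done
    for node in "${!node_pages[@]}"; do
        echo "node${node}=${node_pages[$node]} hugepages"   # e.g. node0=512
    done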
11:00:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:43.004 11:00:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:43.004 11:00:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:43.004 11:00:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:43.004 11:00:11 -- setup/common.sh@18 -- # local node=0 00:12:43.004 11:00:11 -- setup/common.sh@19 -- # local var val 00:12:43.005 11:00:11 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.005 11:00:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.005 11:00:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:43.005 11:00:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:43.005 11:00:11 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.005 11:00:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7163212 kB' 'MemUsed: 5078760 kB' 'SwapCached: 0 kB' 'Active: 892196 kB' 'Inactive: 2799232 kB' 'Active(anon): 132952 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'FilePages: 3568944 kB' 'Mapped: 48988 kB' 'AnonPages: 124180 kB' 'Shmem: 10468 kB' 'KernelStack: 6624 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 177276 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 
11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.005 11:00:11 -- setup/common.sh@32 -- # continue 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.005 11:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.006 11:00:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.006 11:00:11 -- setup/common.sh@33 -- # echo 0 00:12:43.006 11:00:11 -- setup/common.sh@33 -- # return 0 00:12:43.006 11:00:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:43.006 11:00:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:43.006 11:00:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:43.006 11:00:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:43.006 node0=512 expecting 512 00:12:43.006 11:00:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:43.006 11:00:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:43.006 00:12:43.006 real 0m0.518s 00:12:43.006 user 0m0.265s 00:12:43.006 sys 0m0.287s 00:12:43.006 11:00:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:43.006 11:00:11 -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 ************************************ 00:12:43.006 END TEST custom_alloc 00:12:43.006 ************************************ 00:12:43.006 11:00:11 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:12:43.006 11:00:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:43.006 11:00:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.006 11:00:11 -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 ************************************ 00:12:43.264 START TEST no_shrink_alloc 00:12:43.264 ************************************ 00:12:43.264 11:00:11 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:12:43.264 11:00:11 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:12:43.264 11:00:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:12:43.264 11:00:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:43.264 11:00:11 -- setup/hugepages.sh@51 -- # shift 00:12:43.264 11:00:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:43.265 11:00:11 -- setup/hugepages.sh@52 -- # local node_ids 00:12:43.265 11:00:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:43.265 11:00:11 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:12:43.265 11:00:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:43.265 11:00:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:43.265 11:00:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:12:43.265 11:00:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:43.265 11:00:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:43.265 11:00:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:43.265 11:00:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:43.265 11:00:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:43.265 11:00:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:43.265 11:00:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:43.265 11:00:11 -- setup/hugepages.sh@73 -- # return 0 00:12:43.265 11:00:11 -- setup/hugepages.sh@198 -- # setup output 00:12:43.265 11:00:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:43.265 11:00:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:43.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:43.527 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:43.527 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:43.527 11:00:11 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:12:43.527 11:00:11 -- setup/hugepages.sh@89 -- # local node 00:12:43.527 11:00:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:12:43.527 11:00:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:12:43.527 11:00:11 -- setup/hugepages.sh@92 -- # local surp 00:12:43.527 11:00:11 -- setup/hugepages.sh@93 -- # local resv 00:12:43.527 11:00:11 -- setup/hugepages.sh@94 -- # local anon 00:12:43.527 11:00:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:43.527 11:00:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:43.527 11:00:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:43.527 11:00:12 -- setup/common.sh@18 -- # local node= 00:12:43.527 11:00:12 -- setup/common.sh@19 -- # local var val 00:12:43.527 11:00:12 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.527 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.527 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:43.527 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:43.527 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.527 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6114724 kB' 'MemAvailable: 9481916 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 892480 kB' 'Inactive: 2799236 kB' 'Active(anon): 133236 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 124308 kB' 'Mapped: 49456 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177284 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85984 kB' 'KernelStack: 6584 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355936 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.527 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.527 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 
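The verify_nr_hugepages block entered above first tests the transparent-hugepage mode string ("always [madvise] never") before counting AnonHugePages. A small sketch of that check under the standard kernel sysfs paths, with an illustrative variable layout rather than the script's exact logic:

    # Only count anonymous hugepages when THP is not pinned to [never].
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # value in kB
    else
        anon=0
    fi
    echo "anon_hugepages=${anon:-0}"   # expected 0 in this run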
00:12:43.527 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 
11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # 
continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:43.528 11:00:12 -- setup/common.sh@33 -- # echo 0 00:12:43.528 11:00:12 -- setup/common.sh@33 -- # return 0 00:12:43.528 11:00:12 -- setup/hugepages.sh@97 -- # anon=0 00:12:43.528 11:00:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:43.528 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:43.528 11:00:12 -- setup/common.sh@18 -- # local node= 00:12:43.528 11:00:12 -- setup/common.sh@19 -- # local var val 00:12:43.528 11:00:12 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.528 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.528 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:43.528 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:43.528 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.528 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6114976 kB' 'MemAvailable: 9482168 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 887712 kB' 'Inactive: 2799236 kB' 'Active(anon): 128468 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 119552 kB' 'Mapped: 48660 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177264 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85964 kB' 'KernelStack: 6520 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # 
continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.528 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.528 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 
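The backslash-laden patterns in the trace above (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are only bash xtrace quoting of a literal right-hand side inside [[ ]]: get_meminfo walks every field of the meminfo snapshot, skips non-matching names with continue, and echoes the value of the one field it was asked for. A minimal standalone sketch of that lookup, with a hypothetical helper name and a plain file read instead of the script's mapfile-based loop:

#!/usr/bin/env bash
# meminfo_value KEY [FILE] - print the numeric value of one meminfo field.
# Same idea as the traced loop: split each line on ': ', skip fields whose
# name does not match the requested key, print the value of the one that does.
meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # xtrace renders this as [[ Slab == \H\u\g\e... ]]
        echo "$val"
        return 0
    done < "$file"
    return 1
}

meminfo_value HugePages_Surp   # prints 0 on the host traced above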
00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.529 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.529 11:00:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.530 11:00:12 -- setup/common.sh@33 -- # echo 0 00:12:43.530 11:00:12 -- setup/common.sh@33 -- # return 0 00:12:43.530 11:00:12 -- setup/hugepages.sh@99 -- # surp=0 00:12:43.530 11:00:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:43.530 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:43.530 11:00:12 -- setup/common.sh@18 -- # local node= 00:12:43.530 11:00:12 -- setup/common.sh@19 -- # local var val 00:12:43.530 11:00:12 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.530 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.530 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:43.530 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:43.530 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.530 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6114976 kB' 'MemAvailable: 9482168 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 887220 kB' 'Inactive: 2799236 kB' 'Active(anon): 127976 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 119092 kB' 'Mapped: 48440 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177192 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85892 kB' 'KernelStack: 6488 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 
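The mapfile -t mem and mem=("${mem[@]#Node +([0-9]) }") steps in the trace exist because the per-node files under /sys/devices/system/node/nodeN/meminfo prefix every line with "Node N "; stripping that prefix lets one ': '-splitting loop handle both the global and the per-node files. A rough reconstruction of that preprocessing (extglob is required for the +([0-9]) pattern; the node number and field name here are illustrative, not the script's):

#!/usr/bin/env bash
shopt -s extglob                      # needed for the +([0-9]) prefix pattern

node=0
mem_f=/proc/meminfo
# Switch to the per-node file when it exists, as the traced [[ -e ... ]] does.
[[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
    mem_f=/sys/devices/system/node/node${node}/meminfo

mapfile -t mem < "$mem_f"             # one array element per line
mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " prefix on per-node files

# The stripped lines now parse exactly like /proc/meminfo:
printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
    [[ $var == HugePages_Free ]] && echo "node${node} HugePages_Free: $val"
done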
00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.530 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.530 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- 
setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 
00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:43.531 11:00:12 -- setup/common.sh@33 -- # echo 0 00:12:43.531 11:00:12 -- setup/common.sh@33 -- # return 0 00:12:43.531 11:00:12 -- setup/hugepages.sh@100 -- # resv=0 00:12:43.531 nr_hugepages=1024 00:12:43.531 11:00:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:43.531 resv_hugepages=0 00:12:43.531 11:00:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:43.531 surplus_hugepages=0 00:12:43.531 11:00:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:43.531 anon_hugepages=0 00:12:43.531 11:00:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:43.531 11:00:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:43.531 11:00:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:43.531 11:00:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:43.531 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:43.531 11:00:12 -- setup/common.sh@18 -- # local node= 00:12:43.531 11:00:12 -- setup/common.sh@19 -- # local var val 00:12:43.531 11:00:12 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.531 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
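Taken together, the lookups traced so far reduce to one consistency check: the HugePages_Total reported by the kernel must equal the requested nr_hugepages plus any surplus and reserved pages (the AnonHugePages value is only folded in when transparent huge pages are not set to [never]; on this host it is 0 either way). A condensed, self-contained sketch of that arithmetic using the values from the log:

#!/usr/bin/env bash
# Condensed from the traced hugepages.sh checks; log values: total 1024,
# surplus 0, reserved 0, anonymous 0 kB.
nr_hugepages=1024                                    # what the test requested

field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

anon=$(field AnonHugePages)     # counted only when THP is not [never]
surp=$(field HugePages_Surp)
resv=$(field HugePages_Rsvd)
total=$(field HugePages_Total)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
(( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage accounting' >&2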
00:12:43.531 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:43.531 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:43.531 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.531 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6114976 kB' 'MemAvailable: 9482168 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 887424 kB' 'Inactive: 2799236 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 119300 kB' 'Mapped: 48440 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177192 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85892 kB' 'KernelStack: 6472 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- 
setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.531 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.531 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 
00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 
00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.532 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.532 11:00:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 
00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:43.533 11:00:12 -- setup/common.sh@33 -- # echo 1024 00:12:43.533 11:00:12 -- setup/common.sh@33 -- # return 0 00:12:43.533 11:00:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:43.533 11:00:12 -- setup/hugepages.sh@112 -- # get_nodes 00:12:43.533 11:00:12 -- setup/hugepages.sh@27 -- # local node 00:12:43.533 11:00:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:43.533 11:00:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:43.533 11:00:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:43.533 11:00:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:43.533 11:00:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:43.533 11:00:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:43.533 11:00:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:43.533 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:43.533 11:00:12 -- setup/common.sh@18 -- # local node=0 00:12:43.533 11:00:12 -- setup/common.sh@19 -- # local var val 00:12:43.533 11:00:12 -- setup/common.sh@20 -- # local mem_f mem 00:12:43.533 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:43.533 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:43.533 11:00:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:43.533 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem 00:12:43.533 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6114976 kB' 'MemUsed: 6126996 kB' 'SwapCached: 0 kB' 'Active: 887144 kB' 'Inactive: 2799236 kB' 'Active(anon): 127900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'FilePages: 3568948 kB' 'Mapped: 48320 kB' 'AnonPages: 119048 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 177172 
kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 
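The same lookup then repeats per NUMA node: the script globs /sys/devices/system/node/node*/, reads each node's meminfo, and checks the per-node hugepage count against what it expects (a single node holding all 1024 pages on this host, hence the later 'node0=1024 expecting 1024'). A simplified per-node walk under that assumed sysfs layout, not the script's own nodes_test bookkeeping:

#!/usr/bin/env bash
shopt -s extglob
expected=1024     # per-node count the test expects (single-node host in this log)

for dir in /sys/devices/system/node/node+([0-9]); do
    node=${dir##*node}                                        # "0" from ".../node0"
    total=$(awk '/HugePages_Total/ {print $NF}' "$dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    (( total == expected )) || echo "node${node} mismatch" >&2
done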
00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.533 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.533 11:00:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.533 11:00:12 -- 
setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # continue 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:43.534 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:43.534 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:43.534 11:00:12 -- setup/common.sh@33 -- # echo 0 00:12:43.534 11:00:12 -- setup/common.sh@33 -- # return 0 00:12:43.534 11:00:12 -- setup/hugepages.sh@117 -- # (( 
00:12:43.534 11:00:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:12:43.534 11:00:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:12:43.534 11:00:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:12:43.534 11:00:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:12:43.534 node0=1024 expecting 1024
00:12:43.534 11:00:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:12:43.534 11:00:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:12:43.534 11:00:12 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:12:43.534 11:00:12 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:12:43.534 11:00:12 -- setup/hugepages.sh@202 -- # setup output
00:12:43.534 11:00:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:12:43.534 11:00:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:43.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:44.112 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:44.112 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:44.112 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:12:44.112 11:00:12 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:12:44.112 11:00:12 -- setup/hugepages.sh@89 -- # local node
00:12:44.112 11:00:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:12:44.112 11:00:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:12:44.112 11:00:12 -- setup/hugepages.sh@92 -- # local surp
00:12:44.112 11:00:12 -- setup/hugepages.sh@93 -- # local resv
00:12:44.112 11:00:12 -- setup/hugepages.sh@94 -- # local anon
00:12:44.112 11:00:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
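The hugepages.sh@96 test above checks the transparent-hugepage mode that the kernel reports in /sys/kernel/mm/transparent_hugepage/enabled, where the currently active mode is the bracketed word ("always [madvise] never" here, i.e. madvise). A minimal sketch of the same check, written for illustration only; the thp_state variable name is not taken from the SPDK scripts:

# read the THP mode string; the kernel brackets the mode that is currently active
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
# mirror the @96 test above: proceed only while THP is not pinned to "never"
if [[ $thp_state != *"[never]"* ]]; then
    echo "THP active mode: $thp_state"
fi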
00:12:44.112 11:00:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:12:44.112 11:00:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:12:44.112 11:00:12 -- setup/common.sh@18 -- # local node=
00:12:44.112 11:00:12 -- setup/common.sh@19 -- # local var val
00:12:44.112 11:00:12 -- setup/common.sh@20 -- # local mem_f mem
00:12:44.112 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:44.112 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:44.112 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:44.112 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem
00:12:44.112 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:44.112 11:00:12 -- setup/common.sh@31 -- # IFS=': '
00:12:44.112 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6112460 kB' 'MemAvailable: 9479652 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 887632 kB' 'Inactive: 2799236 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 119276 kB' 'Mapped: 48652 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177120 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85820 kB' 'KernelStack: 6548 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
00:12:44.112 11:00:12 -- setup/common.sh@31 -- # read -r var val _
00:12:44.112 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:12:44.112 11:00:12 -- setup/common.sh@32 -- # continue
00:12:44.113 11:00:12 -- setup/common.sh@31 -- # IFS=': '
00:12:44.113 11:00:12 -- setup/common.sh@31 -- # read -r var val _
(the same compare / continue / IFS / read xtrace repeats for every following /proc/meminfo field until AnonHugePages is reached)
00:12:44.114 11:00:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:12:44.114 11:00:12 -- setup/common.sh@33 -- # echo 0
00:12:44.114 11:00:12 -- setup/common.sh@33 -- # return 0
00:12:44.114 11:00:12 -- setup/hugepages.sh@97 -- # anon=0
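Everything traced between common.sh@17 and hugepages.sh@97 above is a single get_meminfo lookup: /proc/meminfo (or a per-node copy) is read into an array, an optional "Node N " prefix is stripped, and each "Key: value" line is split on ': ' until the requested key matches, at which point the value is echoed (0 for AnonHugePages here). A self-contained sketch of that technique, for illustration; the function and variable names below are not the SPDK originals:

#!/usr/bin/env bash
# illustrative re-creation of the meminfo lookup traced above (not the SPDK implementation)
get_meminfo_field() {
    local get=$1 node=${2:-}            # key to look up, optional NUMA node number
    local mem_f=/proc/meminfo line var val _
    # a per-node request reads that node's own meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem=()
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node N ", strip it
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then   # first matching key wins
            echo "$val"
            return 0
        fi
    done
    return 1
}

# usage sketch; on the VM traced above both lookups report 1024
get_meminfo_field HugePages_Total
get_meminfo_field HugePages_Free 0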
00:12:44.114 11:00:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:12:44.114 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:12:44.114 11:00:12 -- setup/common.sh@18 -- # local node=
00:12:44.114 11:00:12 -- setup/common.sh@19 -- # local var val
00:12:44.114 11:00:12 -- setup/common.sh@20 -- # local mem_f mem
00:12:44.114 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:44.114 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:44.114 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:44.114 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem
00:12:44.114 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:44.114 11:00:12 -- setup/common.sh@31 -- # IFS=': '
00:12:44.114 11:00:12 -- setup/common.sh@31 -- # read -r var val _
00:12:44.114 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6112804 kB' 'MemAvailable: 9479992 kB' 'Buffers: 3456 kB' 'Cached: 3565488 kB' 'SwapCached: 0 kB' 'Active: 887272 kB' 'Inactive: 2799232 kB' 'Active(anon): 128028 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 119232 kB' 'Mapped: 48328 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177120 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85820 kB' 'KernelStack: 6512 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
00:12:44.114 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:12:44.114 11:00:12 -- setup/common.sh@32 -- # continue
(the same compare / continue / IFS / read xtrace repeats for every following field until HugePages_Surp is reached)
00:12:44.115 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:12:44.115 11:00:12 -- setup/common.sh@33 -- # echo 0
00:12:44.115 11:00:12 -- setup/common.sh@33 -- # return 0
00:12:44.115 11:00:12 -- setup/hugepages.sh@99 -- # surp=0
00:12:44.115 11:00:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:12:44.115 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:12:44.115 11:00:12 -- setup/common.sh@18 -- # local node=
00:12:44.115 11:00:12 -- setup/common.sh@19 -- # local var val
00:12:44.115 11:00:12 -- setup/common.sh@20 -- # local mem_f mem
00:12:44.115 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:44.115 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:44.115 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:44.115 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem
00:12:44.115 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:44.115 11:00:12 -- setup/common.sh@31 -- # IFS=': '
00:12:44.115 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6112804 kB' 'MemAvailable: 9479996 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 887180 kB' 'Inactive: 2799236 kB' 'Active(anon): 127936 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 119360 kB' 'Mapped: 48320 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177120 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85820 kB' 'KernelStack: 6480 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
00:12:44.116 11:00:12 -- setup/common.sh@31 -- # read -r var val _
00:12:44.116 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:12:44.116 11:00:12 -- setup/common.sh@32 -- # continue
(the same compare / continue / IFS / read xtrace repeats for every following field until HugePages_Rsvd is reached)
00:12:44.117 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:12:44.117 11:00:12 -- setup/common.sh@33 -- # echo 0
00:12:44.117 11:00:12 -- setup/common.sh@33 -- # return 0
00:12:44.117 11:00:12 -- setup/hugepages.sh@100 -- # resv=0
00:12:44.117 11:00:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:12:44.117 nr_hugepages=1024
00:12:44.117 11:00:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:12:44.117 resv_hugepages=0
00:12:44.117 11:00:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:12:44.117 surplus_hugepages=0
00:12:44.117 11:00:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:12:44.117 anon_hugepages=0
00:12:44.117 11:00:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:12:44.117 11:00:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
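The two (( )) tests at hugepages.sh@107 and @109 above are the consistency check itself: the page count the test expects (1024) has to equal nr_hugepages plus the surplus and reserved pages just read from /proc/meminfo. With the values echoed above this reduces to a plain identity; the small sketch below only replays that arithmetic (the expected_pages name is illustrative, the numbers are the ones from this log):

# values taken from the snapshots and echo lines above
expected_pages=1024   # what the test asked for
nr_hugepages=1024     # HugePages_Total
surp=0                # HugePages_Surp
resv=0                # HugePages_Rsvd

(( expected_pages == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
(( expected_pages == nr_hugepages ))               && echo "no surplus or reserved pages in use"
# the same snapshots also satisfy: Hugetlb (2097152 kB) = HugePages_Total (1024) * Hugepagesize (2048 kB)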
00:12:44.117 11:00:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:12:44.117 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:12:44.117 11:00:12 -- setup/common.sh@18 -- # local node=
00:12:44.117 11:00:12 -- setup/common.sh@19 -- # local var val
00:12:44.117 11:00:12 -- setup/common.sh@20 -- # local mem_f mem
00:12:44.117 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:12:44.117 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:12:44.117 11:00:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:12:44.117 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem
00:12:44.117 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:12:44.117 11:00:12 -- setup/common.sh@31 -- # IFS=': '
00:12:44.117 11:00:12 -- setup/common.sh@31 -- # read -r var val _
00:12:44.117 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6113324 kB' 'MemAvailable: 9480516 kB' 'Buffers: 3456 kB' 'Cached: 3565492 kB' 'SwapCached: 0 kB' 'Active: 887348 kB' 'Inactive: 2799236 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'AnonPages: 119248 kB' 'Mapped: 48320 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 177120 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85820 kB' 'KernelStack: 6480 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB'
00:12:44.117 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:12:44.117 11:00:12 -- setup/common.sh@32 -- # continue
(the same compare / continue / IFS / read xtrace repeats field by field; the scan has reached FilePmdMapped as the log continues below)
00:12:44.118 11:00:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.118 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:44.118 11:00:12 -- setup/common.sh@33 -- # echo 1024 00:12:44.118 11:00:12 -- setup/common.sh@33 -- # return 0 00:12:44.118 11:00:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:44.118 11:00:12 -- setup/hugepages.sh@112 -- # get_nodes 00:12:44.118 11:00:12 -- setup/hugepages.sh@27 -- # local node 00:12:44.118 11:00:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:44.118 11:00:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:44.118 11:00:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:44.118 11:00:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:44.118 11:00:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:44.118 11:00:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:44.118 11:00:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:44.118 11:00:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:44.118 11:00:12 -- setup/common.sh@18 -- # local node=0 00:12:44.118 11:00:12 -- setup/common.sh@19 -- # local var val 00:12:44.118 11:00:12 -- setup/common.sh@20 -- # local mem_f mem 00:12:44.118 11:00:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:44.118 11:00:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:44.118 11:00:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:44.118 11:00:12 -- setup/common.sh@28 -- # mapfile -t mem 00:12:44.118 11:00:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.118 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6113352 kB' 'MemUsed: 6128620 kB' 'SwapCached: 0 kB' 'Active: 887364 kB' 'Inactive: 2799236 kB' 'Active(anon): 128120 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2799236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1604 kB' 'Writeback: 0 kB' 'FilePages: 3568948 kB' 'Mapped: 48320 kB' 'AnonPages: 119316 kB' 'Shmem: 10468 kB' 'KernelStack: 6480 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 177120 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 85820 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 
00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # continue 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # IFS=': ' 00:12:44.119 11:00:12 -- setup/common.sh@31 -- # read -r var val _ 00:12:44.119 11:00:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:44.119 11:00:12 -- setup/common.sh@33 -- # echo 0 00:12:44.119 11:00:12 -- setup/common.sh@33 -- # return 0 00:12:44.119 11:00:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:44.119 11:00:12 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:44.119 11:00:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:44.119 11:00:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:44.119 node0=1024 expecting 1024 00:12:44.119 11:00:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:44.119 11:00:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:44.119 00:12:44.119 real 0m0.996s 00:12:44.119 user 0m0.515s 00:12:44.119 sys 0m0.545s 00:12:44.119 11:00:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.119 11:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:44.119 ************************************ 00:12:44.119 END TEST no_shrink_alloc 00:12:44.119 ************************************ 00:12:44.119 11:00:12 -- setup/hugepages.sh@217 -- # clear_hp 00:12:44.119 11:00:12 -- setup/hugepages.sh@37 -- # local node hp 00:12:44.119 11:00:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:44.119 11:00:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:44.120 11:00:12 -- setup/hugepages.sh@41 -- # echo 0 00:12:44.120 11:00:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:44.120 11:00:12 -- setup/hugepages.sh@41 -- # echo 0 00:12:44.120 11:00:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:44.120 11:00:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:44.120 00:12:44.120 real 0m4.846s 00:12:44.120 user 0m2.306s 00:12:44.120 sys 0m2.594s 00:12:44.120 11:00:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.120 11:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:44.120 ************************************ 00:12:44.120 END TEST hugepages 00:12:44.120 ************************************ 00:12:44.120 11:00:12 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:44.120 11:00:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:44.120 11:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.120 11:00:12 -- common/autotest_common.sh@10 -- # set +x 00:12:44.379 ************************************ 00:12:44.379 START TEST driver 00:12:44.379 ************************************ 00:12:44.379 11:00:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:44.379 * Looking for test storage... 
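Before the driver tests get going, it is worth summarizing what the hugepages checks above are actually doing: get_meminfo opens either /proc/meminfo or the per-node /sys/devices/system/node/node<N>/meminfo file, strips the "Node <n>" prefix where present, and scans key/value pairs until the requested field (HugePages_Total, HugePages_Surp, ...) turns up, which is why the trace shows one [[ ... ]]/continue pair per meminfo line. The sketch below is a minimal reconstruction of that pattern as inferred from the trace; the function name and argument handling are illustrative and not the setup/common.sh source.

```bash
#!/usr/bin/env bash
# Minimal reconstruction of the meminfo lookup traced above: read one key,
# either system-wide from /proc/meminfo or for one NUMA node from
# /sys/devices/system/node/node<N>/meminfo. Illustrative only.

get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo

    # Per-node figures live in sysfs, as the trace shows for node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while read -r line; do
        # Node files prefix every line with "Node <n> "; strip it before parsing.
        [[ -n $node ]] && line=${line#Node "$node" }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# Example: the two values the hugepages accounting above compares.
echo "global HugePages_Total: $(get_meminfo_value HugePages_Total)"
echo "node0  HugePages_Surp:  $(get_meminfo_value HugePages_Surp 0)"
```

With 1024 pages reserved and 0 surplus pages on node 0, the test's "node0=1024 expecting 1024" check above passes.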
00:12:44.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:44.379 11:00:12 -- setup/driver.sh@68 -- # setup reset 00:12:44.379 11:00:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:44.379 11:00:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:44.947 11:00:13 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:12:44.947 11:00:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:44.947 11:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.947 11:00:13 -- common/autotest_common.sh@10 -- # set +x 00:12:44.947 ************************************ 00:12:44.947 START TEST guess_driver 00:12:44.947 ************************************ 00:12:44.947 11:00:13 -- common/autotest_common.sh@1111 -- # guess_driver 00:12:44.947 11:00:13 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:12:44.947 11:00:13 -- setup/driver.sh@47 -- # local fail=0 00:12:44.947 11:00:13 -- setup/driver.sh@49 -- # pick_driver 00:12:44.947 11:00:13 -- setup/driver.sh@36 -- # vfio 00:12:44.947 11:00:13 -- setup/driver.sh@21 -- # local iommu_grups 00:12:44.947 11:00:13 -- setup/driver.sh@22 -- # local unsafe_vfio 00:12:44.947 11:00:13 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:12:44.947 11:00:13 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:12:44.947 11:00:13 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:12:44.947 11:00:13 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:12:44.947 11:00:13 -- setup/driver.sh@32 -- # return 1 00:12:44.947 11:00:13 -- setup/driver.sh@38 -- # uio 00:12:44.947 11:00:13 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:12:44.947 11:00:13 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:12:44.947 11:00:13 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:12:44.947 11:00:13 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:12:44.947 11:00:13 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:12:44.947 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:12:44.947 11:00:13 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:12:44.947 11:00:13 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:12:44.947 11:00:13 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:12:44.947 Looking for driver=uio_pci_generic 00:12:44.947 11:00:13 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:12:44.947 11:00:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:44.947 11:00:13 -- setup/driver.sh@45 -- # setup output config 00:12:44.947 11:00:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:44.947 11:00:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:45.900 11:00:14 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:12:45.900 11:00:14 -- setup/driver.sh@58 -- # continue 00:12:45.900 11:00:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:45.900 11:00:14 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:45.900 11:00:14 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:45.900 11:00:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:45.900 11:00:14 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:45.900 11:00:14 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:45.900 11:00:14 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:45.900 11:00:14 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:12:45.900 11:00:14 -- setup/driver.sh@65 -- # setup reset 00:12:45.900 11:00:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:45.900 11:00:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:46.466 00:12:46.466 real 0m1.388s 00:12:46.466 user 0m0.559s 00:12:46.466 sys 0m0.808s 00:12:46.466 11:00:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:46.466 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:12:46.466 ************************************ 00:12:46.466 END TEST guess_driver 00:12:46.466 ************************************ 00:12:46.466 00:12:46.466 real 0m2.142s 00:12:46.466 user 0m0.807s 00:12:46.466 sys 0m1.351s 00:12:46.466 11:00:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:46.466 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:12:46.466 ************************************ 00:12:46.466 END TEST driver 00:12:46.466 ************************************ 00:12:46.466 11:00:14 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:46.466 11:00:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:46.466 11:00:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.466 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:12:46.466 ************************************ 00:12:46.466 START TEST devices 00:12:46.466 ************************************ 00:12:46.466 11:00:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:46.724 * Looking for test storage... 00:12:46.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:46.724 11:00:15 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:12:46.724 11:00:15 -- setup/devices.sh@192 -- # setup reset 00:12:46.724 11:00:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:46.724 11:00:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:47.342 11:00:15 -- setup/devices.sh@194 -- # get_zoned_devs 00:12:47.342 11:00:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:47.342 11:00:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:47.342 11:00:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:47.342 11:00:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.342 11:00:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:47.342 11:00:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:47.342 11:00:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:47.342 11:00:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.342 11:00:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.342 11:00:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:12:47.342 11:00:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:12:47.342 11:00:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:12:47.342 11:00:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.342 11:00:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.342 11:00:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:12:47.342 11:00:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:12:47.342 11:00:15 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:12:47.342 11:00:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.342 11:00:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.342 11:00:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:47.343 11:00:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:47.343 11:00:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:47.343 11:00:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.343 11:00:15 -- setup/devices.sh@196 -- # blocks=() 00:12:47.343 11:00:15 -- setup/devices.sh@196 -- # declare -a blocks 00:12:47.343 11:00:15 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:12:47.343 11:00:15 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:12:47.343 11:00:15 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:12:47.343 11:00:15 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:47.343 11:00:15 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:12:47.343 11:00:15 -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:47.343 11:00:15 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:47.343 11:00:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:47.343 11:00:15 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:12:47.343 11:00:15 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:12:47.343 11:00:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:12:47.343 No valid GPT data, bailing 00:12:47.343 11:00:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:47.343 11:00:15 -- scripts/common.sh@391 -- # pt= 00:12:47.343 11:00:15 -- scripts/common.sh@392 -- # return 1 00:12:47.343 11:00:15 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:12:47.343 11:00:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:47.343 11:00:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:47.343 11:00:15 -- setup/common.sh@80 -- # echo 4294967296 00:12:47.343 11:00:15 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:47.343 11:00:15 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:47.343 11:00:15 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:47.343 11:00:15 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:47.343 11:00:15 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:12:47.343 11:00:15 -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:47.343 11:00:15 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:47.343 11:00:15 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:47.343 11:00:15 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:12:47.343 11:00:15 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:12:47.343 11:00:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:12:47.616 No valid GPT data, bailing 00:12:47.616 11:00:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:12:47.616 11:00:16 -- scripts/common.sh@391 -- # pt= 00:12:47.616 11:00:16 -- scripts/common.sh@392 -- # return 1 00:12:47.616 11:00:16 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:12:47.616 11:00:16 -- setup/common.sh@76 -- # local dev=nvme0n2 00:12:47.616 11:00:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:12:47.616 11:00:16 -- setup/common.sh@80 -- # echo 4294967296 00:12:47.616 11:00:16 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:47.616 11:00:16 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:47.616 11:00:16 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:47.616 11:00:16 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:47.616 11:00:16 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:12:47.616 11:00:16 -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:47.616 11:00:16 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:47.616 11:00:16 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:47.616 11:00:16 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:12:47.616 11:00:16 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:12:47.616 11:00:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:12:47.616 No valid GPT data, bailing 00:12:47.616 11:00:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:12:47.616 11:00:16 -- scripts/common.sh@391 -- # pt= 00:12:47.616 11:00:16 -- scripts/common.sh@392 -- # return 1 00:12:47.616 11:00:16 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:12:47.616 11:00:16 -- setup/common.sh@76 -- # local dev=nvme0n3 00:12:47.616 11:00:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:12:47.616 11:00:16 -- setup/common.sh@80 -- # echo 4294967296 00:12:47.616 11:00:16 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:47.616 11:00:16 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:47.616 11:00:16 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:47.616 11:00:16 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:47.616 11:00:16 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:12:47.616 11:00:16 -- setup/devices.sh@201 -- # ctrl=nvme1 00:12:47.616 11:00:16 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:12:47.616 11:00:16 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:47.616 11:00:16 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:12:47.616 11:00:16 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:12:47.616 11:00:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:12:47.616 No valid GPT data, bailing 00:12:47.616 11:00:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:47.616 11:00:16 -- scripts/common.sh@391 -- # pt= 00:12:47.616 11:00:16 -- scripts/common.sh@392 -- # return 1 00:12:47.616 11:00:16 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:12:47.616 11:00:16 -- setup/common.sh@76 -- # local dev=nvme1n1 00:12:47.616 11:00:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:12:47.616 11:00:16 -- setup/common.sh@80 -- # echo 5368709120 00:12:47.616 11:00:16 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:12:47.616 11:00:16 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:47.616 11:00:16 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:12:47.616 11:00:16 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:12:47.616 11:00:16 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:12:47.616 11:00:16 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:12:47.616 11:00:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:47.616 11:00:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:47.616 11:00:16 -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 
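The devices.sh preamble traced above walks every /sys/block/nvme* entry, skips zoned namespaces, uses blkid/spdk-gpt.py to confirm the disk carries no partition table ("No valid GPT data, bailing" means the disk is free), requires at least min_disk_size (3221225472) bytes, and records the owning PCI address for later PCI_ALLOWED filtering. A rough stand-alone sketch of that scan follows; the variable names and the PCI lookup path are illustrative assumptions, not the setup/devices.sh implementation.

```bash
#!/usr/bin/env bash
# Sketch of the NVMe device scan traced above: keep unpartitioned, non-zoned
# disks of at least min_disk_size bytes and remember their PCI addresses.
shopt -s nullglob

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme*; do
    name=${block##*/}

    # A zoned namespace reports something other than "none" here.
    if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
        continue
    fi

    # Disks that already carry a partition table are considered in use.
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$name" 2>/dev/null) ]]; then
        continue
    fi

    # /sys reports the size in 512-byte sectors.
    size=$(( $(<"$block/size") * 512 ))
    (( size >= min_disk_size )) || continue

    # Resolve the controller's PCI address (e.g. 0000:00:11.0) from sysfs.
    pci=$(basename "$(readlink -f "$block/device/device")")

    blocks+=("$name")
    blocks_to_pci[$name]=$pci
done

for name in "${blocks[@]}"; do
    echo "usable disk: $name (pci ${blocks_to_pci[$name]})"
done
```

In the run above this yields four candidate namespaces (nvme0n1..n3 on 0000:00:11.0 and nvme1n1 on 0000:00:10.0), with nvme0n1 chosen as the test disk.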
************************************ 00:12:47.874 START TEST nvme_mount 00:12:47.874 ************************************ 00:12:47.874 11:00:16 -- common/autotest_common.sh@1111 -- # nvme_mount 00:12:47.874 11:00:16 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:12:47.874 11:00:16 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:12:47.874 11:00:16 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:47.874 11:00:16 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:47.874 11:00:16 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:12:47.874 11:00:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:47.874 11:00:16 -- setup/common.sh@40 -- # local part_no=1 00:12:47.874 11:00:16 -- setup/common.sh@41 -- # local size=1073741824 00:12:47.874 11:00:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:47.874 11:00:16 -- setup/common.sh@44 -- # parts=() 00:12:47.874 11:00:16 -- setup/common.sh@44 -- # local parts 00:12:47.874 11:00:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:12:47.874 11:00:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:47.874 11:00:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:47.874 11:00:16 -- setup/common.sh@46 -- # (( part++ )) 00:12:47.874 11:00:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:47.874 11:00:16 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:47.874 11:00:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:47.874 11:00:16 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:12:48.810 Creating new GPT entries in memory. 00:12:48.810 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:48.810 other utilities. 00:12:48.810 11:00:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:12:48.810 11:00:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:48.810 11:00:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:48.810 11:00:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:48.810 11:00:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:49.744 Creating new GPT entries in memory. 00:12:49.744 The operation has completed successfully. 
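The partitioning step that produced the "Creating new GPT entries in memory" messages boils down to an sgdisk wipe followed by one --new call per requested partition; the real script serializes sgdisk with flock and uses sync_dev_uevents.sh to wait for the partition uevent. The sketch below is a simplified illustration of the same sequence (udevadm settle stands in for the uevent sync helper); adjust the disk name before trying it anywhere.

```bash
#!/usr/bin/env bash
# Sketch of the partition_drive step traced above: wipe the label, create
# partition 1 over sectors 2048..264191, and wait for the node to appear.
set -euo pipefail

disk=/dev/nvme0n1   # target disk from the log; double-check before running
start=2048          # first usable sector, as in the trace
end=264191          # start + 1073741824/4096 - 1, the span the (( ... )) lines compute

sgdisk "$disk" --zap-all                  # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:"$start":"$end"    # partition 1, sectors 2048..264191

udevadm settle                            # let udev create /dev/nvme0n1p1
[[ -b ${disk}p1 ]] && echo "created ${disk}p1"
```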
00:12:49.744 11:00:18 -- setup/common.sh@57 -- # (( part++ )) 00:12:49.744 11:00:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:49.744 11:00:18 -- setup/common.sh@62 -- # wait 71362 00:12:49.744 11:00:18 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:49.744 11:00:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:12:49.744 11:00:18 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:49.744 11:00:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:12:49.744 11:00:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:12:49.744 11:00:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.003 11:00:18 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:50.003 11:00:18 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:50.003 11:00:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:12:50.003 11:00:18 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.003 11:00:18 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:50.003 11:00:18 -- setup/devices.sh@53 -- # local found=0 00:12:50.003 11:00:18 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:50.003 11:00:18 -- setup/devices.sh@56 -- # : 00:12:50.003 11:00:18 -- setup/devices.sh@59 -- # local pci status 00:12:50.003 11:00:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.003 11:00:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:50.003 11:00:18 -- setup/devices.sh@47 -- # setup output config 00:12:50.003 11:00:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:50.003 11:00:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:50.003 11:00:18 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.003 11:00:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:12:50.003 11:00:18 -- setup/devices.sh@63 -- # found=1 00:12:50.003 11:00:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.003 11:00:18 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.003 11:00:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.263 11:00:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.263 11:00:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.263 11:00:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.263 11:00:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.263 11:00:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:50.263 11:00:18 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:50.263 11:00:18 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.263 11:00:18 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:50.263 11:00:18 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:50.263 11:00:18 -- setup/devices.sh@110 -- # cleanup_nvme 00:12:50.263 11:00:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.263 11:00:18 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.263 11:00:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:50.263 11:00:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:50.263 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:50.263 11:00:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:50.263 11:00:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:50.522 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:50.522 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:50.522 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:50.522 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:50.522 11:00:19 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:12:50.522 11:00:19 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:12:50.522 11:00:19 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.522 11:00:19 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:12:50.522 11:00:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:12:50.791 11:00:19 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.791 11:00:19 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:50.791 11:00:19 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:50.791 11:00:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:12:50.791 11:00:19 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:50.791 11:00:19 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:50.791 11:00:19 -- setup/devices.sh@53 -- # local found=0 00:12:50.791 11:00:19 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:50.791 11:00:19 -- setup/devices.sh@56 -- # : 00:12:50.791 11:00:19 -- setup/devices.sh@59 -- # local pci status 00:12:50.791 11:00:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:50.791 11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.791 11:00:19 -- setup/devices.sh@47 -- # setup output config 00:12:50.791 11:00:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:50.791 11:00:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:50.791 11:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.791 11:00:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:12:50.791 11:00:19 -- setup/devices.sh@63 -- # found=1 00:12:50.791 11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.791 11:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.791 
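The nvme_mount cycle traced here follows a simple format/mount/verify/cleanup shape: mkfs.ext4 -qF on the target, mount under test/setup/nvme_mount, create the test_nvme marker, then cleanup_nvme umounts and runs wipefs over the partition and the whole disk, which is where the "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" lines come from. The condensed sketch below mirrors those paths for illustration only; it is not the setup/devices.sh functions themselves and is destructive to the named devices.

```bash
#!/usr/bin/env bash
# Condensed view of the nvme_mount cycle traced above: format, mount, drop a
# marker file, then tear down with umount and wipefs.
set -euo pipefail

part=/dev/nvme0n1p1
disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
test_file=$mnt/test_nvme

mkdir -p "$mnt"
mkfs.ext4 -qF "$part"        # quiet + force, as in the trace
mount "$part" "$mnt"
: > "$test_file"             # the verify step only needs the file to exist

# ... verify() checks the device/mount/file association here ...

rm -f "$test_file"
mountpoint -q "$mnt" && umount "$mnt"
wipefs --all "$part"         # erases the ext4 magic reported as "53 ef" at 0x438
wipefs --all "$disk"         # then the GPT/PMBR signatures on the whole disk
```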
11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.050 11:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.050 11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.050 11:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.050 11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.050 11:00:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:51.050 11:00:19 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:51.050 11:00:19 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:51.050 11:00:19 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:51.050 11:00:19 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:51.050 11:00:19 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:51.051 11:00:19 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:12:51.051 11:00:19 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:51.051 11:00:19 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:12:51.051 11:00:19 -- setup/devices.sh@50 -- # local mount_point= 00:12:51.051 11:00:19 -- setup/devices.sh@51 -- # local test_file= 00:12:51.051 11:00:19 -- setup/devices.sh@53 -- # local found=0 00:12:51.051 11:00:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:51.051 11:00:19 -- setup/devices.sh@59 -- # local pci status 00:12:51.051 11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.051 11:00:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:51.051 11:00:19 -- setup/devices.sh@47 -- # setup output config 00:12:51.051 11:00:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:51.051 11:00:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:51.309 11:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.309 11:00:19 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:12:51.309 11:00:19 -- setup/devices.sh@63 -- # found=1 00:12:51.309 11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.309 11:00:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.309 11:00:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.567 11:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.567 11:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.567 11:00:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.567 11:00:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.824 11:00:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:51.824 11:00:20 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:51.824 11:00:20 -- setup/devices.sh@68 -- # return 0 00:12:51.824 11:00:20 -- setup/devices.sh@128 -- # cleanup_nvme 00:12:51.824 11:00:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:51.824 11:00:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:51.824 11:00:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:51.824 11:00:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:51.824 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:12:51.824 00:12:51.824 real 0m3.945s 00:12:51.824 user 0m0.676s 00:12:51.824 sys 0m1.009s 00:12:51.824 11:00:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:51.824 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:12:51.824 ************************************ 00:12:51.824 END TEST nvme_mount 00:12:51.824 ************************************ 00:12:51.824 11:00:20 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:12:51.824 11:00:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:51.824 11:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.825 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:12:51.825 ************************************ 00:12:51.825 START TEST dm_mount 00:12:51.825 ************************************ 00:12:51.825 11:00:20 -- common/autotest_common.sh@1111 -- # dm_mount 00:12:51.825 11:00:20 -- setup/devices.sh@144 -- # pv=nvme0n1 00:12:51.825 11:00:20 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:12:51.825 11:00:20 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:12:51.825 11:00:20 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:12:51.825 11:00:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:51.825 11:00:20 -- setup/common.sh@40 -- # local part_no=2 00:12:51.825 11:00:20 -- setup/common.sh@41 -- # local size=1073741824 00:12:51.825 11:00:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:51.825 11:00:20 -- setup/common.sh@44 -- # parts=() 00:12:51.825 11:00:20 -- setup/common.sh@44 -- # local parts 00:12:51.825 11:00:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:12:51.825 11:00:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:51.825 11:00:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:51.825 11:00:20 -- setup/common.sh@46 -- # (( part++ )) 00:12:51.825 11:00:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:51.825 11:00:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:51.825 11:00:20 -- setup/common.sh@46 -- # (( part++ )) 00:12:51.825 11:00:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:51.825 11:00:20 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:51.825 11:00:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:51.825 11:00:20 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:12:52.764 Creating new GPT entries in memory. 00:12:52.764 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:52.764 other utilities. 00:12:52.764 11:00:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:12:52.764 11:00:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:52.764 11:00:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:52.764 11:00:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:52.764 11:00:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:54.140 Creating new GPT entries in memory. 00:12:54.140 The operation has completed successfully. 00:12:54.140 11:00:22 -- setup/common.sh@57 -- # (( part++ )) 00:12:54.140 11:00:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:54.140 11:00:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:12:54.140 11:00:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:54.140 11:00:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:12:55.072 The operation has completed successfully. 00:12:55.072 11:00:23 -- setup/common.sh@57 -- # (( part++ )) 00:12:55.072 11:00:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:55.072 11:00:23 -- setup/common.sh@62 -- # wait 71799 00:12:55.072 11:00:23 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:55.072 11:00:23 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.072 11:00:23 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:55.072 11:00:23 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:55.072 11:00:23 -- setup/devices.sh@160 -- # for t in {1..5} 00:12:55.072 11:00:23 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:55.072 11:00:23 -- setup/devices.sh@161 -- # break 00:12:55.072 11:00:23 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:55.072 11:00:23 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:55.072 11:00:23 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:55.072 11:00:23 -- setup/devices.sh@166 -- # dm=dm-0 00:12:55.072 11:00:23 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:55.072 11:00:23 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:55.072 11:00:23 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.072 11:00:23 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:12:55.072 11:00:23 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.073 11:00:23 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:55.073 11:00:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:55.073 11:00:23 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.073 11:00:23 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:55.073 11:00:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:55.073 11:00:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:12:55.073 11:00:23 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.073 11:00:23 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:55.073 11:00:23 -- setup/devices.sh@53 -- # local found=0 00:12:55.073 11:00:23 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:55.073 11:00:23 -- setup/devices.sh@56 -- # : 00:12:55.073 11:00:23 -- setup/devices.sh@59 -- # local pci status 00:12:55.073 11:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.073 11:00:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:55.073 11:00:23 -- setup/devices.sh@47 -- # setup output config 00:12:55.073 11:00:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:55.073 11:00:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:55.073 11:00:23 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.073 11:00:23 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:55.073 11:00:23 -- setup/devices.sh@63 -- # found=1 00:12:55.073 11:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.073 11:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.073 11:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.332 11:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.332 11:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.332 11:00:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.332 11:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.332 11:00:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:55.332 11:00:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:12:55.332 11:00:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.590 11:00:23 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:55.590 11:00:23 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:55.590 11:00:23 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.590 11:00:23 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:55.590 11:00:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:55.590 11:00:23 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:55.590 11:00:23 -- setup/devices.sh@50 -- # local mount_point= 00:12:55.590 11:00:23 -- setup/devices.sh@51 -- # local test_file= 00:12:55.590 11:00:23 -- setup/devices.sh@53 -- # local found=0 00:12:55.590 11:00:23 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:55.590 11:00:23 -- setup/devices.sh@59 -- # local pci status 00:12:55.590 11:00:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.590 11:00:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:55.590 11:00:23 -- setup/devices.sh@47 -- # setup output config 00:12:55.590 11:00:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:12:55.590 11:00:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:55.590 11:00:24 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.590 11:00:24 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:55.590 11:00:24 -- setup/devices.sh@63 -- # found=1 00:12:55.590 11:00:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.590 11:00:24 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.590 11:00:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.847 11:00:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.847 11:00:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.847 11:00:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.847 11:00:24 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:55.847 11:00:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:55.847 11:00:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:55.847 11:00:24 -- setup/devices.sh@68 -- # return 0 00:12:55.847 11:00:24 -- setup/devices.sh@187 -- # cleanup_dm 00:12:55.847 11:00:24 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:55.847 11:00:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:55.847 11:00:24 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:56.105 11:00:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:56.105 11:00:24 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:12:56.105 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:56.105 11:00:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:56.105 11:00:24 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:12:56.105 00:12:56.105 real 0m4.188s 00:12:56.105 user 0m0.469s 00:12:56.105 sys 0m0.689s 00:12:56.105 11:00:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:56.105 11:00:24 -- common/autotest_common.sh@10 -- # set +x 00:12:56.105 ************************************ 00:12:56.105 END TEST dm_mount 00:12:56.105 ************************************ 00:12:56.105 11:00:24 -- setup/devices.sh@1 -- # cleanup 00:12:56.105 11:00:24 -- setup/devices.sh@11 -- # cleanup_nvme 00:12:56.105 11:00:24 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:56.105 11:00:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:56.105 11:00:24 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:56.105 11:00:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:56.105 11:00:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:56.363 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:56.363 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:56.363 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:56.363 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:56.363 11:00:24 -- setup/devices.sh@12 -- # cleanup_dm 00:12:56.363 11:00:24 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:56.363 11:00:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:56.363 11:00:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:56.363 11:00:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:56.363 11:00:24 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:56.363 11:00:24 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:56.363 00:12:56.363 real 0m9.793s 00:12:56.363 user 0m1.869s 00:12:56.363 sys 0m2.317s 00:12:56.363 11:00:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:56.363 11:00:24 -- common/autotest_common.sh@10 -- # set +x 00:12:56.363 ************************************ 00:12:56.363 END TEST devices 00:12:56.363 ************************************ 00:12:56.363 00:12:56.363 real 0m22.128s 00:12:56.363 user 0m7.238s 00:12:56.363 sys 0m9.196s 00:12:56.363 11:00:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:56.363 11:00:24 -- common/autotest_common.sh@10 -- # set +x 00:12:56.363 ************************************ 00:12:56.363 END TEST setup.sh 00:12:56.363 ************************************ 00:12:56.363 11:00:24 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:56.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:56.929 Hugepages 00:12:56.929 node hugesize free / total 00:12:56.929 node0 1048576kB 0 / 0 00:12:56.929 node0 2048kB 2048 / 2048 00:12:56.929 00:12:56.929 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:57.186 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:57.186 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:12:57.186 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:12:57.186 11:00:25 -- spdk/autotest.sh@130 -- # uname -s 00:12:57.186 11:00:25 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:12:57.186 11:00:25 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:12:57.186 11:00:25 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:58.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:58.120 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:58.120 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:58.120 11:00:26 -- common/autotest_common.sh@1518 -- # sleep 1 00:12:59.054 11:00:27 -- common/autotest_common.sh@1519 -- # bdfs=() 00:12:59.054 11:00:27 -- common/autotest_common.sh@1519 -- # local bdfs 00:12:59.054 11:00:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:12:59.054 11:00:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:12:59.054 11:00:27 -- common/autotest_common.sh@1499 -- # bdfs=() 00:12:59.054 11:00:27 -- common/autotest_common.sh@1499 -- # local bdfs 00:12:59.054 11:00:27 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:59.054 11:00:27 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:59.054 11:00:27 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:12:59.311 11:00:27 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:12:59.311 11:00:27 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:12:59.311 11:00:27 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:59.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:59.568 Waiting for block devices as requested 00:12:59.568 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:59.568 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:59.826 11:00:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:12:59.826 11:00:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:12:59.826 11:00:28 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:12:59.826 11:00:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:12:59.826 11:00:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:12:59.826 11:00:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1543 -- # continue 00:12:59.826 11:00:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:12:59.826 11:00:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:12:59.826 11:00:28 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:12:59.826 11:00:28 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:12:59.826 11:00:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:12:59.826 11:00:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:12:59.826 11:00:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:12:59.826 11:00:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:12:59.826 11:00:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:12:59.826 11:00:28 -- common/autotest_common.sh@1543 -- # continue 00:12:59.826 11:00:28 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:12:59.826 11:00:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:59.826 11:00:28 -- common/autotest_common.sh@10 -- # set +x 00:12:59.826 11:00:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:12:59.826 11:00:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:59.826 11:00:28 -- common/autotest_common.sh@10 -- # set +x 00:12:59.826 11:00:28 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:00.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:13:00.648 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.648 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:00.648 11:00:29 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:13:00.648 11:00:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:00.648 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:13:00.648 11:00:29 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:13:00.648 11:00:29 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:13:00.648 11:00:29 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:13:00.648 11:00:29 -- common/autotest_common.sh@1563 -- # bdfs=() 00:13:00.648 11:00:29 -- common/autotest_common.sh@1563 -- # local bdfs 00:13:00.648 11:00:29 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:13:00.648 11:00:29 -- common/autotest_common.sh@1499 -- # bdfs=() 00:13:00.648 11:00:29 -- common/autotest_common.sh@1499 -- # local bdfs 00:13:00.648 11:00:29 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:00.648 11:00:29 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:00.648 11:00:29 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:13:00.907 11:00:29 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:13:00.907 11:00:29 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:00.907 11:00:29 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:13:00.907 11:00:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:13:00.907 11:00:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:13:00.907 11:00:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:00.907 11:00:29 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:13:00.907 11:00:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:13:00.907 11:00:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:13:00.907 11:00:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:00.907 11:00:29 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:13:00.907 11:00:29 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:13:00.907 11:00:29 -- common/autotest_common.sh@1579 -- # return 0 00:13:00.907 11:00:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:13:00.907 11:00:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:13:00.907 11:00:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:00.907 11:00:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:00.907 11:00:29 -- spdk/autotest.sh@162 -- # timing_enter lib 00:13:00.907 11:00:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:00.907 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:13:00.907 11:00:29 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:00.907 11:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:00.907 11:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.907 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:13:00.907 ************************************ 00:13:00.907 START TEST env 00:13:00.907 ************************************ 00:13:00.907 11:00:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:00.907 * Looking for test storage... 
00:13:00.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:13:00.907 11:00:29 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:00.907 11:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:00.907 11:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.907 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:13:01.165 ************************************ 00:13:01.165 START TEST env_memory 00:13:01.165 ************************************ 00:13:01.165 11:00:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:01.165 00:13:01.165 00:13:01.165 CUnit - A unit testing framework for C - Version 2.1-3 00:13:01.165 http://cunit.sourceforge.net/ 00:13:01.165 00:13:01.165 00:13:01.165 Suite: memory 00:13:01.165 Test: alloc and free memory map ...[2024-04-18 11:00:29.625851] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:13:01.165 passed 00:13:01.165 Test: mem map translation ...[2024-04-18 11:00:29.650614] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:13:01.165 [2024-04-18 11:00:29.650661] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:13:01.165 [2024-04-18 11:00:29.650705] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:13:01.165 [2024-04-18 11:00:29.650714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:13:01.165 passed 00:13:01.165 Test: mem map registration ...[2024-04-18 11:00:29.700788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:13:01.165 [2024-04-18 11:00:29.700830] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:13:01.165 passed 00:13:01.165 Test: mem map adjacent registrations ...passed 00:13:01.165 00:13:01.165 Run Summary: Type Total Ran Passed Failed Inactive 00:13:01.165 suites 1 1 n/a 0 0 00:13:01.165 tests 4 4 4 0 0 00:13:01.165 asserts 152 152 152 0 n/a 00:13:01.165 00:13:01.165 Elapsed time = 0.170 seconds 00:13:01.165 00:13:01.165 real 0m0.184s 00:13:01.165 user 0m0.172s 00:13:01.165 sys 0m0.012s 00:13:01.165 11:00:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:01.165 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:13:01.165 ************************************ 00:13:01.165 END TEST env_memory 00:13:01.165 ************************************ 00:13:01.425 11:00:29 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:01.425 11:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.425 11:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.425 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:13:01.425 ************************************ 00:13:01.425 START TEST env_vtophys 00:13:01.425 ************************************ 00:13:01.425 11:00:29 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:01.425 EAL: lib.eal log level changed from notice to debug 00:13:01.425 EAL: Detected lcore 0 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 1 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 2 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 3 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 4 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 5 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 6 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 7 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 8 as core 0 on socket 0 00:13:01.425 EAL: Detected lcore 9 as core 0 on socket 0 00:13:01.425 EAL: Maximum logical cores by configuration: 128 00:13:01.425 EAL: Detected CPU lcores: 10 00:13:01.425 EAL: Detected NUMA nodes: 1 00:13:01.425 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:13:01.425 EAL: Detected shared linkage of DPDK 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:13:01.425 EAL: Registered [vdev] bus. 00:13:01.425 EAL: bus.vdev log level changed from disabled to notice 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:13:01.425 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:13:01.425 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:13:01.425 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:13:01.425 EAL: No shared files mode enabled, IPC will be disabled 00:13:01.425 EAL: No shared files mode enabled, IPC is disabled 00:13:01.425 EAL: Selected IOVA mode 'PA' 00:13:01.425 EAL: Probing VFIO support... 00:13:01.425 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:01.425 EAL: VFIO modules not loaded, skipping VFIO support... 00:13:01.425 EAL: Ask a virtual area of 0x2e000 bytes 00:13:01.425 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:13:01.425 EAL: Setting up physically contiguous memory... 
00:13:01.425 EAL: Setting maximum number of open files to 524288 00:13:01.425 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:13:01.425 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:13:01.425 EAL: Ask a virtual area of 0x61000 bytes 00:13:01.425 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:13:01.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:01.425 EAL: Ask a virtual area of 0x400000000 bytes 00:13:01.425 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:13:01.425 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:13:01.425 EAL: Ask a virtual area of 0x61000 bytes 00:13:01.425 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:13:01.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:01.425 EAL: Ask a virtual area of 0x400000000 bytes 00:13:01.425 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:13:01.425 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:13:01.425 EAL: Ask a virtual area of 0x61000 bytes 00:13:01.425 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:13:01.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:01.425 EAL: Ask a virtual area of 0x400000000 bytes 00:13:01.425 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:13:01.425 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:13:01.425 EAL: Ask a virtual area of 0x61000 bytes 00:13:01.425 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:13:01.425 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:01.425 EAL: Ask a virtual area of 0x400000000 bytes 00:13:01.425 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:13:01.425 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:13:01.425 EAL: Hugepages will be freed exactly as allocated. 00:13:01.425 EAL: No shared files mode enabled, IPC is disabled 00:13:01.425 EAL: No shared files mode enabled, IPC is disabled 00:13:01.425 EAL: TSC frequency is ~2200000 KHz 00:13:01.425 EAL: Main lcore 0 is ready (tid=7f290bebba00;cpuset=[0]) 00:13:01.425 EAL: Trying to obtain current memory policy. 00:13:01.425 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.425 EAL: Restoring previous memory policy: 0 00:13:01.425 EAL: request: mp_malloc_sync 00:13:01.425 EAL: No shared files mode enabled, IPC is disabled 00:13:01.425 EAL: Heap on socket 0 was expanded by 2MB 00:13:01.425 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:01.425 EAL: No shared files mode enabled, IPC is disabled 00:13:01.425 EAL: No PCI address specified using 'addr=' in: bus=pci 00:13:01.425 EAL: Mem event callback 'spdk:(nil)' registered 00:13:01.425 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:13:01.425 00:13:01.425 00:13:01.425 CUnit - A unit testing framework for C - Version 2.1-3 00:13:01.425 http://cunit.sourceforge.net/ 00:13:01.425 00:13:01.425 00:13:01.425 Suite: components_suite 00:13:01.425 Test: vtophys_malloc_test ...passed 00:13:01.425 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:13:01.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.426 EAL: Restoring previous memory policy: 4 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was expanded by 4MB 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was shrunk by 4MB 00:13:01.426 EAL: Trying to obtain current memory policy. 00:13:01.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.426 EAL: Restoring previous memory policy: 4 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was expanded by 6MB 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was shrunk by 6MB 00:13:01.426 EAL: Trying to obtain current memory policy. 00:13:01.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.426 EAL: Restoring previous memory policy: 4 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was expanded by 10MB 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was shrunk by 10MB 00:13:01.426 EAL: Trying to obtain current memory policy. 00:13:01.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.426 EAL: Restoring previous memory policy: 4 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was expanded by 18MB 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was shrunk by 18MB 00:13:01.426 EAL: Trying to obtain current memory policy. 00:13:01.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.426 EAL: Restoring previous memory policy: 4 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was expanded by 34MB 00:13:01.426 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.426 EAL: request: mp_malloc_sync 00:13:01.426 EAL: No shared files mode enabled, IPC is disabled 00:13:01.426 EAL: Heap on socket 0 was shrunk by 34MB 00:13:01.426 EAL: Trying to obtain current memory policy. 
00:13:01.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.684 EAL: Restoring previous memory policy: 4 00:13:01.684 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.684 EAL: request: mp_malloc_sync 00:13:01.684 EAL: No shared files mode enabled, IPC is disabled 00:13:01.684 EAL: Heap on socket 0 was expanded by 66MB 00:13:01.684 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.684 EAL: request: mp_malloc_sync 00:13:01.684 EAL: No shared files mode enabled, IPC is disabled 00:13:01.684 EAL: Heap on socket 0 was shrunk by 66MB 00:13:01.684 EAL: Trying to obtain current memory policy. 00:13:01.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.684 EAL: Restoring previous memory policy: 4 00:13:01.684 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.684 EAL: request: mp_malloc_sync 00:13:01.684 EAL: No shared files mode enabled, IPC is disabled 00:13:01.684 EAL: Heap on socket 0 was expanded by 130MB 00:13:01.684 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.684 EAL: request: mp_malloc_sync 00:13:01.684 EAL: No shared files mode enabled, IPC is disabled 00:13:01.684 EAL: Heap on socket 0 was shrunk by 130MB 00:13:01.684 EAL: Trying to obtain current memory policy. 00:13:01.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.684 EAL: Restoring previous memory policy: 4 00:13:01.684 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.684 EAL: request: mp_malloc_sync 00:13:01.684 EAL: No shared files mode enabled, IPC is disabled 00:13:01.684 EAL: Heap on socket 0 was expanded by 258MB 00:13:01.684 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.942 EAL: request: mp_malloc_sync 00:13:01.942 EAL: No shared files mode enabled, IPC is disabled 00:13:01.942 EAL: Heap on socket 0 was shrunk by 258MB 00:13:01.942 EAL: Trying to obtain current memory policy. 00:13:01.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:01.942 EAL: Restoring previous memory policy: 4 00:13:01.942 EAL: Calling mem event callback 'spdk:(nil)' 00:13:01.942 EAL: request: mp_malloc_sync 00:13:01.942 EAL: No shared files mode enabled, IPC is disabled 00:13:01.942 EAL: Heap on socket 0 was expanded by 514MB 00:13:02.200 EAL: Calling mem event callback 'spdk:(nil)' 00:13:02.200 EAL: request: mp_malloc_sync 00:13:02.200 EAL: No shared files mode enabled, IPC is disabled 00:13:02.200 EAL: Heap on socket 0 was shrunk by 514MB 00:13:02.200 EAL: Trying to obtain current memory policy. 
00:13:02.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:02.459 EAL: Restoring previous memory policy: 4 00:13:02.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:02.459 EAL: request: mp_malloc_sync 00:13:02.459 EAL: No shared files mode enabled, IPC is disabled 00:13:02.459 EAL: Heap on socket 0 was expanded by 1026MB 00:13:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:13:02.717 passed 00:13:02.717 00:13:02.717 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.717 suites 1 1 n/a 0 0 00:13:02.717 tests 2 2 2 0 0 00:13:02.717 asserts 5358 5358 5358 0 n/a 00:13:02.717 00:13:02.717 Elapsed time = 1.294 seconds 00:13:02.717 EAL: request: mp_malloc_sync 00:13:02.717 EAL: No shared files mode enabled, IPC is disabled 00:13:02.717 EAL: Heap on socket 0 was shrunk by 1026MB 00:13:02.717 EAL: Calling mem event callback 'spdk:(nil)' 00:13:02.717 EAL: request: mp_malloc_sync 00:13:02.717 EAL: No shared files mode enabled, IPC is disabled 00:13:02.717 EAL: Heap on socket 0 was shrunk by 2MB 00:13:02.717 EAL: No shared files mode enabled, IPC is disabled 00:13:02.717 EAL: No shared files mode enabled, IPC is disabled 00:13:02.717 EAL: No shared files mode enabled, IPC is disabled 00:13:02.975 00:13:02.975 real 0m1.485s 00:13:02.975 user 0m0.819s 00:13:02.975 sys 0m0.536s 00:13:02.975 11:00:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.975 ************************************ 00:13:02.975 END TEST env_vtophys 00:13:02.975 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 ************************************ 00:13:02.975 11:00:31 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:02.975 11:00:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.975 11:00:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.975 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 ************************************ 00:13:02.975 START TEST env_pci 00:13:02.975 ************************************ 00:13:02.975 11:00:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:02.975 00:13:02.975 00:13:02.975 CUnit - A unit testing framework for C - Version 2.1-3 00:13:02.975 http://cunit.sourceforge.net/ 00:13:02.975 00:13:02.975 00:13:02.975 Suite: pci 00:13:02.975 Test: pci_hook ...[2024-04-18 11:00:31.492110] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73007 has claimed it 00:13:02.975 passed 00:13:02.975 00:13:02.975 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.975 suites 1 1 n/a 0 0 00:13:02.975 tests 1 1 1 0 0 00:13:02.975 asserts 25 25 25 0 n/a 00:13:02.975 00:13:02.975 Elapsed time = 0.002 seconds 00:13:02.975 EAL: Cannot find device (10000:00:01.0) 00:13:02.975 EAL: Failed to attach device on primary process 00:13:02.975 00:13:02.975 real 0m0.019s 00:13:02.975 user 0m0.009s 00:13:02.975 sys 0m0.009s 00:13:02.975 11:00:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.975 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 ************************************ 00:13:02.975 END TEST env_pci 00:13:02.975 ************************************ 00:13:02.975 11:00:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:13:02.975 11:00:31 -- env/env.sh@15 -- # uname 00:13:02.975 11:00:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:13:02.975 11:00:31 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:13:02.975 11:00:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:02.975 11:00:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:02.975 11:00:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.975 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 ************************************ 00:13:02.975 START TEST env_dpdk_post_init 00:13:02.975 ************************************ 00:13:02.976 11:00:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:03.234 EAL: Detected CPU lcores: 10 00:13:03.234 EAL: Detected NUMA nodes: 1 00:13:03.234 EAL: Detected shared linkage of DPDK 00:13:03.234 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:03.234 EAL: Selected IOVA mode 'PA' 00:13:03.234 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:03.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:13:03.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:13:03.234 Starting DPDK initialization... 00:13:03.234 Starting SPDK post initialization... 00:13:03.234 SPDK NVMe probe 00:13:03.234 Attaching to 0000:00:10.0 00:13:03.234 Attaching to 0000:00:11.0 00:13:03.234 Attached to 0000:00:10.0 00:13:03.234 Attached to 0000:00:11.0 00:13:03.234 Cleaning up... 00:13:03.234 00:13:03.234 real 0m0.171s 00:13:03.234 user 0m0.040s 00:13:03.234 sys 0m0.032s 00:13:03.234 11:00:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.234 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.234 ************************************ 00:13:03.234 END TEST env_dpdk_post_init 00:13:03.234 ************************************ 00:13:03.234 11:00:31 -- env/env.sh@26 -- # uname 00:13:03.234 11:00:31 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:13:03.234 11:00:31 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:03.234 11:00:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:03.234 11:00:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.234 11:00:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.493 ************************************ 00:13:03.493 START TEST env_mem_callbacks 00:13:03.493 ************************************ 00:13:03.493 11:00:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:03.493 EAL: Detected CPU lcores: 10 00:13:03.493 EAL: Detected NUMA nodes: 1 00:13:03.493 EAL: Detected shared linkage of DPDK 00:13:03.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:03.493 EAL: Selected IOVA mode 'PA' 00:13:03.493 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:03.493 00:13:03.494 00:13:03.494 CUnit - A unit testing framework for C - Version 2.1-3 00:13:03.494 http://cunit.sourceforge.net/ 00:13:03.494 00:13:03.494 00:13:03.494 Suite: memory 00:13:03.494 Test: test ... 
00:13:03.494 register 0x200000200000 2097152 00:13:03.494 malloc 3145728 00:13:03.494 register 0x200000400000 4194304 00:13:03.494 buf 0x200000500000 len 3145728 PASSED 00:13:03.494 malloc 64 00:13:03.494 buf 0x2000004fff40 len 64 PASSED 00:13:03.494 malloc 4194304 00:13:03.494 register 0x200000800000 6291456 00:13:03.494 buf 0x200000a00000 len 4194304 PASSED 00:13:03.494 free 0x200000500000 3145728 00:13:03.494 free 0x2000004fff40 64 00:13:03.494 unregister 0x200000400000 4194304 PASSED 00:13:03.494 free 0x200000a00000 4194304 00:13:03.494 unregister 0x200000800000 6291456 PASSED 00:13:03.494 malloc 8388608 00:13:03.494 register 0x200000400000 10485760 00:13:03.494 buf 0x200000600000 len 8388608 PASSED 00:13:03.494 free 0x200000600000 8388608 00:13:03.494 unregister 0x200000400000 10485760 PASSED 00:13:03.494 passed 00:13:03.494 00:13:03.494 Run Summary: Type Total Ran Passed Failed Inactive 00:13:03.494 suites 1 1 n/a 0 0 00:13:03.494 tests 1 1 1 0 0 00:13:03.494 asserts 15 15 15 0 n/a 00:13:03.494 00:13:03.494 Elapsed time = 0.007 seconds 00:13:03.494 00:13:03.494 real 0m0.135s 00:13:03.494 user 0m0.015s 00:13:03.494 sys 0m0.019s 00:13:03.494 11:00:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.494 11:00:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.494 ************************************ 00:13:03.494 END TEST env_mem_callbacks 00:13:03.494 ************************************ 00:13:03.494 00:13:03.494 real 0m2.642s 00:13:03.494 user 0m1.300s 00:13:03.494 sys 0m0.939s 00:13:03.494 11:00:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.494 ************************************ 00:13:03.494 END TEST env 00:13:03.494 11:00:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.494 ************************************ 00:13:03.494 11:00:32 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:03.494 11:00:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:03.494 11:00:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.494 11:00:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.763 ************************************ 00:13:03.763 START TEST rpc 00:13:03.763 ************************************ 00:13:03.763 11:00:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:03.763 * Looking for test storage... 00:13:03.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:03.763 11:00:32 -- rpc/rpc.sh@65 -- # spdk_pid=73135 00:13:03.763 11:00:32 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:13:03.763 11:00:32 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:03.763 11:00:32 -- rpc/rpc.sh@67 -- # waitforlisten 73135 00:13:03.763 11:00:32 -- common/autotest_common.sh@817 -- # '[' -z 73135 ']' 00:13:03.763 11:00:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.763 11:00:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:03.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.763 11:00:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:03.763 11:00:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:03.763 11:00:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.763 [2024-04-18 11:00:32.328828] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:03.763 [2024-04-18 11:00:32.328934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73135 ] 00:13:04.022 [2024-04-18 11:00:32.468054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.022 [2024-04-18 11:00:32.559690] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:13:04.022 [2024-04-18 11:00:32.559757] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73135' to capture a snapshot of events at runtime. 00:13:04.022 [2024-04-18 11:00:32.559769] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.022 [2024-04-18 11:00:32.559778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.022 [2024-04-18 11:00:32.559797] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73135 for offline analysis/debug. 00:13:04.022 [2024-04-18 11:00:32.559838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.960 11:00:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:04.960 11:00:33 -- common/autotest_common.sh@850 -- # return 0 00:13:04.960 11:00:33 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:04.960 11:00:33 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:04.960 11:00:33 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:13:04.960 11:00:33 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:13:04.960 11:00:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:04.960 11:00:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.960 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:04.960 ************************************ 00:13:04.960 START TEST rpc_integrity 00:13:04.960 ************************************ 00:13:04.960 11:00:33 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:13:04.960 11:00:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:04.960 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.960 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:04.960 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.960 11:00:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:04.960 11:00:33 -- rpc/rpc.sh@13 -- # jq length 00:13:04.960 11:00:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:04.960 11:00:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:04.960 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.960 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:04.960 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.960 11:00:33 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:13:04.960 11:00:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
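For reference, the rpc_integrity steps traced around this point drive the spdk_tgt instance started above (with the bdev tracepoint group enabled) over the default /var/tmp/spdk.sock socket via the suite's rpc_cmd wrapper. A rough standalone equivalent of the same malloc/passthru round-trip, assuming SPDK's scripts/rpc.py client against that default socket (the passthru create/delete steps appear further below in the trace), is:

  # hypothetical standalone run, not the exact harness invocation
  ./build/bin/spdk_tgt -e bdev &
  ./scripts/rpc.py bdev_malloc_create 8 512              # 8 MiB malloc bdev with 512-byte blocks -> "Malloc0"
  ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 1
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 2 (Malloc0 + Passthru0)
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0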
00:13:04.960 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.960 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:04.960 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.960 11:00:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:04.960 { 00:13:04.960 "aliases": [ 00:13:04.960 "b8228a8b-dc8f-4f4c-961a-76e99a262d0c" 00:13:04.960 ], 00:13:04.960 "assigned_rate_limits": { 00:13:04.960 "r_mbytes_per_sec": 0, 00:13:04.960 "rw_ios_per_sec": 0, 00:13:04.960 "rw_mbytes_per_sec": 0, 00:13:04.960 "w_mbytes_per_sec": 0 00:13:04.960 }, 00:13:04.960 "block_size": 512, 00:13:04.960 "claimed": false, 00:13:04.960 "driver_specific": {}, 00:13:04.960 "memory_domains": [ 00:13:04.960 { 00:13:04.960 "dma_device_id": "system", 00:13:04.960 "dma_device_type": 1 00:13:04.960 }, 00:13:04.960 { 00:13:04.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.960 "dma_device_type": 2 00:13:04.960 } 00:13:04.960 ], 00:13:04.960 "name": "Malloc0", 00:13:04.960 "num_blocks": 16384, 00:13:04.960 "product_name": "Malloc disk", 00:13:04.960 "supported_io_types": { 00:13:04.960 "abort": true, 00:13:04.960 "compare": false, 00:13:04.960 "compare_and_write": false, 00:13:04.960 "flush": true, 00:13:04.960 "nvme_admin": false, 00:13:04.960 "nvme_io": false, 00:13:04.960 "read": true, 00:13:04.960 "reset": true, 00:13:04.960 "unmap": true, 00:13:04.960 "write": true, 00:13:04.960 "write_zeroes": true 00:13:04.960 }, 00:13:04.960 "uuid": "b8228a8b-dc8f-4f4c-961a-76e99a262d0c", 00:13:04.960 "zoned": false 00:13:04.960 } 00:13:04.960 ]' 00:13:04.960 11:00:33 -- rpc/rpc.sh@17 -- # jq length 00:13:04.961 11:00:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:04.961 11:00:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:13:04.961 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.961 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:04.961 [2024-04-18 11:00:33.569239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:13:04.961 [2024-04-18 11:00:33.569318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.961 [2024-04-18 11:00:33.569347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcf2fa0 00:13:04.961 [2024-04-18 11:00:33.569363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.961 [2024-04-18 11:00:33.571196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.961 [2024-04-18 11:00:33.571239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:04.961 Passthru0 00:13:04.961 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.961 11:00:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:04.961 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.961 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.219 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.219 11:00:33 -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:05.219 { 00:13:05.219 "aliases": [ 00:13:05.219 "b8228a8b-dc8f-4f4c-961a-76e99a262d0c" 00:13:05.219 ], 00:13:05.219 "assigned_rate_limits": { 00:13:05.219 "r_mbytes_per_sec": 0, 00:13:05.219 "rw_ios_per_sec": 0, 00:13:05.219 "rw_mbytes_per_sec": 0, 00:13:05.219 "w_mbytes_per_sec": 0 00:13:05.219 }, 00:13:05.219 "block_size": 512, 00:13:05.219 "claim_type": "exclusive_write", 00:13:05.219 "claimed": true, 00:13:05.219 "driver_specific": {}, 00:13:05.219 "memory_domains": [ 
00:13:05.219 { 00:13:05.219 "dma_device_id": "system", 00:13:05.219 "dma_device_type": 1 00:13:05.219 }, 00:13:05.219 { 00:13:05.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.219 "dma_device_type": 2 00:13:05.219 } 00:13:05.219 ], 00:13:05.219 "name": "Malloc0", 00:13:05.219 "num_blocks": 16384, 00:13:05.219 "product_name": "Malloc disk", 00:13:05.219 "supported_io_types": { 00:13:05.219 "abort": true, 00:13:05.220 "compare": false, 00:13:05.220 "compare_and_write": false, 00:13:05.220 "flush": true, 00:13:05.220 "nvme_admin": false, 00:13:05.220 "nvme_io": false, 00:13:05.220 "read": true, 00:13:05.220 "reset": true, 00:13:05.220 "unmap": true, 00:13:05.220 "write": true, 00:13:05.220 "write_zeroes": true 00:13:05.220 }, 00:13:05.220 "uuid": "b8228a8b-dc8f-4f4c-961a-76e99a262d0c", 00:13:05.220 "zoned": false 00:13:05.220 }, 00:13:05.220 { 00:13:05.220 "aliases": [ 00:13:05.220 "93b043ed-8410-5d18-885a-e1a86c2d4086" 00:13:05.220 ], 00:13:05.220 "assigned_rate_limits": { 00:13:05.220 "r_mbytes_per_sec": 0, 00:13:05.220 "rw_ios_per_sec": 0, 00:13:05.220 "rw_mbytes_per_sec": 0, 00:13:05.220 "w_mbytes_per_sec": 0 00:13:05.220 }, 00:13:05.220 "block_size": 512, 00:13:05.220 "claimed": false, 00:13:05.220 "driver_specific": { 00:13:05.220 "passthru": { 00:13:05.220 "base_bdev_name": "Malloc0", 00:13:05.220 "name": "Passthru0" 00:13:05.220 } 00:13:05.220 }, 00:13:05.220 "memory_domains": [ 00:13:05.220 { 00:13:05.220 "dma_device_id": "system", 00:13:05.220 "dma_device_type": 1 00:13:05.220 }, 00:13:05.220 { 00:13:05.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.220 "dma_device_type": 2 00:13:05.220 } 00:13:05.220 ], 00:13:05.220 "name": "Passthru0", 00:13:05.220 "num_blocks": 16384, 00:13:05.220 "product_name": "passthru", 00:13:05.220 "supported_io_types": { 00:13:05.220 "abort": true, 00:13:05.220 "compare": false, 00:13:05.220 "compare_and_write": false, 00:13:05.220 "flush": true, 00:13:05.220 "nvme_admin": false, 00:13:05.220 "nvme_io": false, 00:13:05.220 "read": true, 00:13:05.220 "reset": true, 00:13:05.220 "unmap": true, 00:13:05.220 "write": true, 00:13:05.220 "write_zeroes": true 00:13:05.220 }, 00:13:05.220 "uuid": "93b043ed-8410-5d18-885a-e1a86c2d4086", 00:13:05.220 "zoned": false 00:13:05.220 } 00:13:05.220 ]' 00:13:05.220 11:00:33 -- rpc/rpc.sh@21 -- # jq length 00:13:05.220 11:00:33 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:05.220 11:00:33 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:05.220 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.220 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.220 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.220 11:00:33 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:05.220 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.220 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.220 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.220 11:00:33 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:05.220 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.220 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.220 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.220 11:00:33 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:05.220 11:00:33 -- rpc/rpc.sh@26 -- # jq length 00:13:05.220 11:00:33 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:05.220 00:13:05.220 real 0m0.324s 00:13:05.220 user 0m0.207s 00:13:05.220 sys 0m0.042s 00:13:05.220 11:00:33 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:13:05.220 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.220 ************************************ 00:13:05.220 END TEST rpc_integrity 00:13:05.220 ************************************ 00:13:05.220 11:00:33 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:13:05.220 11:00:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:05.220 11:00:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.220 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.220 ************************************ 00:13:05.220 START TEST rpc_plugins 00:13:05.220 ************************************ 00:13:05.220 11:00:33 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:13:05.220 11:00:33 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:13:05.220 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.220 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.220 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.220 11:00:33 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:13:05.220 11:00:33 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:13:05.220 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.220 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.479 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.479 11:00:33 -- rpc/rpc.sh@31 -- # bdevs='[ 00:13:05.479 { 00:13:05.479 "aliases": [ 00:13:05.479 "4e6a9042-7c68-4b5c-96a1-f01b9dbf8b1f" 00:13:05.479 ], 00:13:05.479 "assigned_rate_limits": { 00:13:05.479 "r_mbytes_per_sec": 0, 00:13:05.479 "rw_ios_per_sec": 0, 00:13:05.479 "rw_mbytes_per_sec": 0, 00:13:05.479 "w_mbytes_per_sec": 0 00:13:05.479 }, 00:13:05.479 "block_size": 4096, 00:13:05.479 "claimed": false, 00:13:05.479 "driver_specific": {}, 00:13:05.479 "memory_domains": [ 00:13:05.479 { 00:13:05.479 "dma_device_id": "system", 00:13:05.479 "dma_device_type": 1 00:13:05.479 }, 00:13:05.479 { 00:13:05.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.479 "dma_device_type": 2 00:13:05.479 } 00:13:05.479 ], 00:13:05.479 "name": "Malloc1", 00:13:05.479 "num_blocks": 256, 00:13:05.479 "product_name": "Malloc disk", 00:13:05.479 "supported_io_types": { 00:13:05.479 "abort": true, 00:13:05.479 "compare": false, 00:13:05.479 "compare_and_write": false, 00:13:05.479 "flush": true, 00:13:05.479 "nvme_admin": false, 00:13:05.479 "nvme_io": false, 00:13:05.479 "read": true, 00:13:05.479 "reset": true, 00:13:05.479 "unmap": true, 00:13:05.479 "write": true, 00:13:05.479 "write_zeroes": true 00:13:05.479 }, 00:13:05.479 "uuid": "4e6a9042-7c68-4b5c-96a1-f01b9dbf8b1f", 00:13:05.479 "zoned": false 00:13:05.479 } 00:13:05.479 ]' 00:13:05.479 11:00:33 -- rpc/rpc.sh@32 -- # jq length 00:13:05.479 11:00:33 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:13:05.479 11:00:33 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:13:05.479 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.479 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.479 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.479 11:00:33 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:13:05.479 11:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.479 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.479 11:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.479 11:00:33 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:13:05.479 11:00:33 -- rpc/rpc.sh@36 -- # jq 
length 00:13:05.479 11:00:33 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:13:05.479 00:13:05.479 real 0m0.142s 00:13:05.479 user 0m0.095s 00:13:05.479 sys 0m0.015s 00:13:05.479 11:00:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:05.479 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:13:05.479 ************************************ 00:13:05.479 END TEST rpc_plugins 00:13:05.479 ************************************ 00:13:05.479 11:00:34 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:13:05.479 11:00:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:05.479 11:00:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.479 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:05.479 ************************************ 00:13:05.479 START TEST rpc_trace_cmd_test 00:13:05.479 ************************************ 00:13:05.479 11:00:34 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:13:05.479 11:00:34 -- rpc/rpc.sh@40 -- # local info 00:13:05.479 11:00:34 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:13:05.479 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.479 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:05.479 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.479 11:00:34 -- rpc/rpc.sh@42 -- # info='{ 00:13:05.479 "bdev": { 00:13:05.479 "mask": "0x8", 00:13:05.479 "tpoint_mask": "0xffffffffffffffff" 00:13:05.479 }, 00:13:05.479 "bdev_nvme": { 00:13:05.479 "mask": "0x4000", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "blobfs": { 00:13:05.479 "mask": "0x80", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "dsa": { 00:13:05.479 "mask": "0x200", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "ftl": { 00:13:05.479 "mask": "0x40", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "iaa": { 00:13:05.479 "mask": "0x1000", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "iscsi_conn": { 00:13:05.479 "mask": "0x2", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "nvme_pcie": { 00:13:05.479 "mask": "0x800", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "nvme_tcp": { 00:13:05.479 "mask": "0x2000", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "nvmf_rdma": { 00:13:05.479 "mask": "0x10", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "nvmf_tcp": { 00:13:05.479 "mask": "0x20", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "scsi": { 00:13:05.479 "mask": "0x4", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "sock": { 00:13:05.479 "mask": "0x8000", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "thread": { 00:13:05.479 "mask": "0x400", 00:13:05.479 "tpoint_mask": "0x0" 00:13:05.479 }, 00:13:05.479 "tpoint_group_mask": "0x8", 00:13:05.479 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73135" 00:13:05.479 }' 00:13:05.479 11:00:34 -- rpc/rpc.sh@43 -- # jq length 00:13:05.738 11:00:34 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:13:05.738 11:00:34 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:13:05.738 11:00:34 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:13:05.738 11:00:34 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:13:05.738 11:00:34 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:13:05.738 11:00:34 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:13:05.738 11:00:34 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:13:05.738 11:00:34 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
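The trace check traced here compares trace_get_info output against the tpoint_group_mask 0x8 (the bdev group) that the target was launched with, and against the shared-memory trace file named after the target's pid. A minimal manual check along the same lines, again assuming scripts/rpc.py on the default socket, is:

  ./scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # expect "0x8" (bdev group enabled at startup)
  ./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # non-zero when bdev tracepoints are active
  ./scripts/rpc.py trace_get_info | jq -r '.tpoint_shm_path'     # e.g. /dev/shm/spdk_tgt_trace.pid<pid>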
00:13:05.738 11:00:34 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:13:05.738 00:13:05.738 real 0m0.275s 00:13:05.738 user 0m0.240s 00:13:05.738 sys 0m0.026s 00:13:05.738 11:00:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:05.738 ************************************ 00:13:05.738 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:05.738 END TEST rpc_trace_cmd_test 00:13:05.738 ************************************ 00:13:05.996 11:00:34 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:13:05.996 11:00:34 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:13:05.996 11:00:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:05.996 11:00:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.996 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:05.996 ************************************ 00:13:05.996 START TEST go_rpc 00:13:05.996 ************************************ 00:13:05.996 11:00:34 -- common/autotest_common.sh@1111 -- # go_rpc 00:13:05.996 11:00:34 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:13:05.996 11:00:34 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:13:05.996 11:00:34 -- rpc/rpc.sh@52 -- # jq length 00:13:05.996 11:00:34 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:13:05.996 11:00:34 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:13:05.996 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.996 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:05.996 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.996 11:00:34 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:13:05.996 11:00:34 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:13:05.996 11:00:34 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["73a383f0-7657-469f-81cd-2ee2ca503541"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"73a383f0-7657-469f-81cd-2ee2ca503541","zoned":false}]' 00:13:05.996 11:00:34 -- rpc/rpc.sh@57 -- # jq length 00:13:06.254 11:00:34 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:13:06.254 11:00:34 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:06.254 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.254 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.254 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.254 11:00:34 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:13:06.254 11:00:34 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:13:06.254 11:00:34 -- rpc/rpc.sh@61 -- # jq length 00:13:06.254 ************************************ 00:13:06.254 END TEST go_rpc 00:13:06.254 ************************************ 00:13:06.254 11:00:34 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:13:06.254 00:13:06.254 real 0m0.221s 00:13:06.254 user 0m0.148s 00:13:06.254 sys 0m0.036s 00:13:06.254 11:00:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.254 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.254 11:00:34 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:13:06.255 11:00:34 -- 
rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:13:06.255 11:00:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:06.255 11:00:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.255 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.255 ************************************ 00:13:06.255 START TEST rpc_daemon_integrity 00:13:06.255 ************************************ 00:13:06.255 11:00:34 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:13:06.255 11:00:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.255 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.255 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.255 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.255 11:00:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:06.255 11:00:34 -- rpc/rpc.sh@13 -- # jq length 00:13:06.255 11:00:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:06.513 11:00:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:06.513 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.513 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.513 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.513 11:00:34 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:13:06.513 11:00:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:06.513 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.513 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.513 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.513 11:00:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:06.513 { 00:13:06.513 "aliases": [ 00:13:06.513 "606a98a5-26cd-49f4-9d6b-45ad5801e8a3" 00:13:06.513 ], 00:13:06.513 "assigned_rate_limits": { 00:13:06.513 "r_mbytes_per_sec": 0, 00:13:06.513 "rw_ios_per_sec": 0, 00:13:06.513 "rw_mbytes_per_sec": 0, 00:13:06.513 "w_mbytes_per_sec": 0 00:13:06.513 }, 00:13:06.513 "block_size": 512, 00:13:06.513 "claimed": false, 00:13:06.513 "driver_specific": {}, 00:13:06.513 "memory_domains": [ 00:13:06.513 { 00:13:06.513 "dma_device_id": "system", 00:13:06.513 "dma_device_type": 1 00:13:06.513 }, 00:13:06.513 { 00:13:06.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.513 "dma_device_type": 2 00:13:06.513 } 00:13:06.513 ], 00:13:06.513 "name": "Malloc3", 00:13:06.513 "num_blocks": 16384, 00:13:06.513 "product_name": "Malloc disk", 00:13:06.513 "supported_io_types": { 00:13:06.513 "abort": true, 00:13:06.513 "compare": false, 00:13:06.513 "compare_and_write": false, 00:13:06.513 "flush": true, 00:13:06.513 "nvme_admin": false, 00:13:06.513 "nvme_io": false, 00:13:06.513 "read": true, 00:13:06.513 "reset": true, 00:13:06.513 "unmap": true, 00:13:06.513 "write": true, 00:13:06.513 "write_zeroes": true 00:13:06.513 }, 00:13:06.513 "uuid": "606a98a5-26cd-49f4-9d6b-45ad5801e8a3", 00:13:06.513 "zoned": false 00:13:06.513 } 00:13:06.513 ]' 00:13:06.513 11:00:34 -- rpc/rpc.sh@17 -- # jq length 00:13:06.513 11:00:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:06.513 11:00:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:13:06.513 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.513 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.513 [2024-04-18 11:00:34.970840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:06.513 [2024-04-18 11:00:34.970901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.513 [2024-04-18 
11:00:34.970921] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8cab0 00:13:06.513 [2024-04-18 11:00:34.970930] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.513 [2024-04-18 11:00:34.972521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.513 [2024-04-18 11:00:34.972574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:06.513 Passthru0 00:13:06.513 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.513 11:00:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:06.513 11:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.513 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.513 11:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.513 11:00:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:06.513 { 00:13:06.513 "aliases": [ 00:13:06.513 "606a98a5-26cd-49f4-9d6b-45ad5801e8a3" 00:13:06.513 ], 00:13:06.513 "assigned_rate_limits": { 00:13:06.513 "r_mbytes_per_sec": 0, 00:13:06.513 "rw_ios_per_sec": 0, 00:13:06.513 "rw_mbytes_per_sec": 0, 00:13:06.513 "w_mbytes_per_sec": 0 00:13:06.513 }, 00:13:06.513 "block_size": 512, 00:13:06.513 "claim_type": "exclusive_write", 00:13:06.513 "claimed": true, 00:13:06.513 "driver_specific": {}, 00:13:06.513 "memory_domains": [ 00:13:06.513 { 00:13:06.513 "dma_device_id": "system", 00:13:06.513 "dma_device_type": 1 00:13:06.513 }, 00:13:06.513 { 00:13:06.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.513 "dma_device_type": 2 00:13:06.513 } 00:13:06.513 ], 00:13:06.513 "name": "Malloc3", 00:13:06.513 "num_blocks": 16384, 00:13:06.513 "product_name": "Malloc disk", 00:13:06.513 "supported_io_types": { 00:13:06.513 "abort": true, 00:13:06.513 "compare": false, 00:13:06.513 "compare_and_write": false, 00:13:06.513 "flush": true, 00:13:06.513 "nvme_admin": false, 00:13:06.513 "nvme_io": false, 00:13:06.513 "read": true, 00:13:06.513 "reset": true, 00:13:06.513 "unmap": true, 00:13:06.513 "write": true, 00:13:06.513 "write_zeroes": true 00:13:06.513 }, 00:13:06.513 "uuid": "606a98a5-26cd-49f4-9d6b-45ad5801e8a3", 00:13:06.513 "zoned": false 00:13:06.513 }, 00:13:06.513 { 00:13:06.513 "aliases": [ 00:13:06.513 "b1382cf5-8575-58e9-81f5-7bac39571533" 00:13:06.513 ], 00:13:06.513 "assigned_rate_limits": { 00:13:06.513 "r_mbytes_per_sec": 0, 00:13:06.513 "rw_ios_per_sec": 0, 00:13:06.513 "rw_mbytes_per_sec": 0, 00:13:06.513 "w_mbytes_per_sec": 0 00:13:06.513 }, 00:13:06.513 "block_size": 512, 00:13:06.513 "claimed": false, 00:13:06.513 "driver_specific": { 00:13:06.513 "passthru": { 00:13:06.513 "base_bdev_name": "Malloc3", 00:13:06.513 "name": "Passthru0" 00:13:06.513 } 00:13:06.513 }, 00:13:06.513 "memory_domains": [ 00:13:06.513 { 00:13:06.513 "dma_device_id": "system", 00:13:06.513 "dma_device_type": 1 00:13:06.513 }, 00:13:06.513 { 00:13:06.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.513 "dma_device_type": 2 00:13:06.513 } 00:13:06.513 ], 00:13:06.513 "name": "Passthru0", 00:13:06.513 "num_blocks": 16384, 00:13:06.513 "product_name": "passthru", 00:13:06.513 "supported_io_types": { 00:13:06.513 "abort": true, 00:13:06.513 "compare": false, 00:13:06.513 "compare_and_write": false, 00:13:06.513 "flush": true, 00:13:06.513 "nvme_admin": false, 00:13:06.513 "nvme_io": false, 00:13:06.513 "read": true, 00:13:06.513 "reset": true, 00:13:06.513 "unmap": true, 00:13:06.513 "write": true, 00:13:06.513 "write_zeroes": true 00:13:06.513 }, 00:13:06.513 
"uuid": "b1382cf5-8575-58e9-81f5-7bac39571533", 00:13:06.513 "zoned": false 00:13:06.513 } 00:13:06.513 ]' 00:13:06.513 11:00:34 -- rpc/rpc.sh@21 -- # jq length 00:13:06.513 11:00:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:06.513 11:00:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:06.513 11:00:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.513 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:06.513 11:00:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.513 11:00:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:06.513 11:00:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.513 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:06.513 11:00:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.513 11:00:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:06.513 11:00:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.513 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:06.514 11:00:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.514 11:00:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:06.514 11:00:35 -- rpc/rpc.sh@26 -- # jq length 00:13:06.514 ************************************ 00:13:06.514 END TEST rpc_daemon_integrity 00:13:06.514 ************************************ 00:13:06.514 11:00:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:06.514 00:13:06.514 real 0m0.275s 00:13:06.514 user 0m0.176s 00:13:06.514 sys 0m0.034s 00:13:06.514 11:00:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.514 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:06.514 11:00:35 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:06.514 11:00:35 -- rpc/rpc.sh@84 -- # killprocess 73135 00:13:06.514 11:00:35 -- common/autotest_common.sh@936 -- # '[' -z 73135 ']' 00:13:06.514 11:00:35 -- common/autotest_common.sh@940 -- # kill -0 73135 00:13:06.514 11:00:35 -- common/autotest_common.sh@941 -- # uname 00:13:06.772 11:00:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:06.772 11:00:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73135 00:13:06.772 killing process with pid 73135 00:13:06.772 11:00:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:06.772 11:00:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:06.772 11:00:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73135' 00:13:06.772 11:00:35 -- common/autotest_common.sh@955 -- # kill 73135 00:13:06.772 11:00:35 -- common/autotest_common.sh@960 -- # wait 73135 00:13:07.030 00:13:07.030 real 0m3.373s 00:13:07.031 user 0m4.476s 00:13:07.031 sys 0m0.868s 00:13:07.031 11:00:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.031 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.031 ************************************ 00:13:07.031 END TEST rpc 00:13:07.031 ************************************ 00:13:07.031 11:00:35 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:07.031 11:00:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:07.031 11:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.031 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.031 ************************************ 00:13:07.031 START TEST skip_rpc 00:13:07.031 ************************************ 00:13:07.031 11:00:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 
00:13:07.289 * Looking for test storage... 00:13:07.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:07.289 11:00:35 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:07.289 11:00:35 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:07.289 11:00:35 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:13:07.289 11:00:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:07.289 11:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.289 11:00:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.289 ************************************ 00:13:07.289 START TEST skip_rpc 00:13:07.289 ************************************ 00:13:07.289 11:00:35 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:13:07.289 11:00:35 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=73433 00:13:07.289 11:00:35 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:13:07.289 11:00:35 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:07.289 11:00:35 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:13:07.289 [2024-04-18 11:00:35.883915] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:07.289 [2024-04-18 11:00:35.884025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73433 ] 00:13:07.547 [2024-04-18 11:00:36.025995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.547 [2024-04-18 11:00:36.119030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.823 11:00:40 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:13:12.823 11:00:40 -- common/autotest_common.sh@638 -- # local es=0 00:13:12.823 11:00:40 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:13:12.823 11:00:40 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:13:12.823 11:00:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:12.823 11:00:40 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:13:12.823 11:00:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:12.823 11:00:40 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:13:12.823 11:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.823 11:00:40 -- common/autotest_common.sh@10 -- # set +x 00:13:12.823 2024/04/18 11:00:40 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:13:12.823 11:00:40 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:12.823 11:00:40 -- common/autotest_common.sh@641 -- # es=1 00:13:12.823 11:00:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:12.823 11:00:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:12.823 11:00:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:12.823 11:00:40 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:13:12.823 11:00:40 -- rpc/skip_rpc.sh@23 -- # killprocess 73433 00:13:12.823 11:00:40 -- common/autotest_common.sh@936 -- # '[' -z 73433 ']' 00:13:12.823 11:00:40 -- common/autotest_common.sh@940 -- # kill -0 73433 00:13:12.823 11:00:40 -- 
common/autotest_common.sh@941 -- # uname 00:13:12.823 11:00:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:12.823 11:00:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73433 00:13:12.823 11:00:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:12.823 11:00:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:12.823 11:00:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73433' 00:13:12.823 killing process with pid 73433 00:13:12.823 11:00:40 -- common/autotest_common.sh@955 -- # kill 73433 00:13:12.823 11:00:40 -- common/autotest_common.sh@960 -- # wait 73433 00:13:12.823 ************************************ 00:13:12.823 END TEST skip_rpc 00:13:12.823 ************************************ 00:13:12.823 00:13:12.823 real 0m5.420s 00:13:12.823 user 0m5.031s 00:13:12.823 sys 0m0.290s 00:13:12.823 11:00:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:12.823 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:13:12.823 11:00:41 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:13:12.823 11:00:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:12.823 11:00:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.823 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:13:12.823 ************************************ 00:13:12.823 START TEST skip_rpc_with_json 00:13:12.823 ************************************ 00:13:12.823 11:00:41 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:13:12.823 11:00:41 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:13:12.823 11:00:41 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73530 00:13:12.823 11:00:41 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:12.823 11:00:41 -- rpc/skip_rpc.sh@31 -- # waitforlisten 73530 00:13:12.823 11:00:41 -- common/autotest_common.sh@817 -- # '[' -z 73530 ']' 00:13:12.823 11:00:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.823 11:00:41 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:12.823 11:00:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:12.823 11:00:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.823 11:00:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:12.823 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:13:12.823 [2024-04-18 11:00:41.418602] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
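While the skip_rpc_with_json target is starting up (its EAL parameter line follows below), the skip_rpc case that just finished is worth spelling out: the target is launched with --no-rpc-server, so no Unix-domain RPC socket is ever created and any client call must fail. A by-hand version of that check, as a sketch against an SPDK build tree:

# Start the target without an RPC server
build/bin/spdk_tgt --no-rpc-server -m 0x1 &

# With no /var/tmp/spdk.sock present, any RPC fails with a connect error, which is the expected outcome
scripts/rpc.py spdk_get_version || echo 'RPC refused, as expected'

# Stop the background target again
kill %1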
00:13:12.823 [2024-04-18 11:00:41.418711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73530 ] 00:13:13.084 [2024-04-18 11:00:41.556685] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.084 [2024-04-18 11:00:41.651453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.019 11:00:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:14.019 11:00:42 -- common/autotest_common.sh@850 -- # return 0 00:13:14.019 11:00:42 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:13:14.019 11:00:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.019 11:00:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.019 [2024-04-18 11:00:42.425596] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:13:14.019 2024/04/18 11:00:42 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:13:14.019 request: 00:13:14.019 { 00:13:14.019 "method": "nvmf_get_transports", 00:13:14.019 "params": { 00:13:14.019 "trtype": "tcp" 00:13:14.019 } 00:13:14.019 } 00:13:14.019 Got JSON-RPC error response 00:13:14.019 GoRPCClient: error on JSON-RPC call 00:13:14.019 11:00:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:14.019 11:00:42 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:13:14.019 11:00:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.019 11:00:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.019 [2024-04-18 11:00:42.437724] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.019 11:00:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.019 11:00:42 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:13:14.019 11:00:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.019 11:00:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.019 11:00:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.019 11:00:42 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:14.019 { 00:13:14.019 "subsystems": [ 00:13:14.019 { 00:13:14.019 "subsystem": "keyring", 00:13:14.019 "config": [] 00:13:14.019 }, 00:13:14.019 { 00:13:14.019 "subsystem": "iobuf", 00:13:14.019 "config": [ 00:13:14.019 { 00:13:14.019 "method": "iobuf_set_options", 00:13:14.019 "params": { 00:13:14.019 "large_bufsize": 135168, 00:13:14.019 "large_pool_count": 1024, 00:13:14.019 "small_bufsize": 8192, 00:13:14.019 "small_pool_count": 8192 00:13:14.019 } 00:13:14.019 } 00:13:14.019 ] 00:13:14.019 }, 00:13:14.019 { 00:13:14.019 "subsystem": "sock", 00:13:14.019 "config": [ 00:13:14.019 { 00:13:14.019 "method": "sock_impl_set_options", 00:13:14.019 "params": { 00:13:14.019 "enable_ktls": false, 00:13:14.019 "enable_placement_id": 0, 00:13:14.019 "enable_quickack": false, 00:13:14.019 "enable_recv_pipe": true, 00:13:14.019 "enable_zerocopy_send_client": false, 00:13:14.019 "enable_zerocopy_send_server": true, 00:13:14.019 "impl_name": "posix", 00:13:14.019 "recv_buf_size": 2097152, 00:13:14.019 "send_buf_size": 2097152, 00:13:14.019 "tls_version": 0, 00:13:14.019 "zerocopy_threshold": 0 00:13:14.019 } 00:13:14.019 }, 00:13:14.019 { 00:13:14.020 "method": "sock_impl_set_options", 00:13:14.020 "params": { 00:13:14.020 
"enable_ktls": false, 00:13:14.020 "enable_placement_id": 0, 00:13:14.020 "enable_quickack": false, 00:13:14.020 "enable_recv_pipe": true, 00:13:14.020 "enable_zerocopy_send_client": false, 00:13:14.020 "enable_zerocopy_send_server": true, 00:13:14.020 "impl_name": "ssl", 00:13:14.020 "recv_buf_size": 4096, 00:13:14.020 "send_buf_size": 4096, 00:13:14.020 "tls_version": 0, 00:13:14.020 "zerocopy_threshold": 0 00:13:14.020 } 00:13:14.020 } 00:13:14.020 ] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "vmd", 00:13:14.020 "config": [] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "accel", 00:13:14.020 "config": [ 00:13:14.020 { 00:13:14.020 "method": "accel_set_options", 00:13:14.020 "params": { 00:13:14.020 "buf_count": 2048, 00:13:14.020 "large_cache_size": 16, 00:13:14.020 "sequence_count": 2048, 00:13:14.020 "small_cache_size": 128, 00:13:14.020 "task_count": 2048 00:13:14.020 } 00:13:14.020 } 00:13:14.020 ] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "bdev", 00:13:14.020 "config": [ 00:13:14.020 { 00:13:14.020 "method": "bdev_set_options", 00:13:14.020 "params": { 00:13:14.020 "bdev_auto_examine": true, 00:13:14.020 "bdev_io_cache_size": 256, 00:13:14.020 "bdev_io_pool_size": 65535, 00:13:14.020 "iobuf_large_cache_size": 16, 00:13:14.020 "iobuf_small_cache_size": 128 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "bdev_raid_set_options", 00:13:14.020 "params": { 00:13:14.020 "process_window_size_kb": 1024 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "bdev_iscsi_set_options", 00:13:14.020 "params": { 00:13:14.020 "timeout_sec": 30 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "bdev_nvme_set_options", 00:13:14.020 "params": { 00:13:14.020 "action_on_timeout": "none", 00:13:14.020 "allow_accel_sequence": false, 00:13:14.020 "arbitration_burst": 0, 00:13:14.020 "bdev_retry_count": 3, 00:13:14.020 "ctrlr_loss_timeout_sec": 0, 00:13:14.020 "delay_cmd_submit": true, 00:13:14.020 "dhchap_dhgroups": [ 00:13:14.020 "null", 00:13:14.020 "ffdhe2048", 00:13:14.020 "ffdhe3072", 00:13:14.020 "ffdhe4096", 00:13:14.020 "ffdhe6144", 00:13:14.020 "ffdhe8192" 00:13:14.020 ], 00:13:14.020 "dhchap_digests": [ 00:13:14.020 "sha256", 00:13:14.020 "sha384", 00:13:14.020 "sha512" 00:13:14.020 ], 00:13:14.020 "disable_auto_failback": false, 00:13:14.020 "fast_io_fail_timeout_sec": 0, 00:13:14.020 "generate_uuids": false, 00:13:14.020 "high_priority_weight": 0, 00:13:14.020 "io_path_stat": false, 00:13:14.020 "io_queue_requests": 0, 00:13:14.020 "keep_alive_timeout_ms": 10000, 00:13:14.020 "low_priority_weight": 0, 00:13:14.020 "medium_priority_weight": 0, 00:13:14.020 "nvme_adminq_poll_period_us": 10000, 00:13:14.020 "nvme_error_stat": false, 00:13:14.020 "nvme_ioq_poll_period_us": 0, 00:13:14.020 "rdma_cm_event_timeout_ms": 0, 00:13:14.020 "rdma_max_cq_size": 0, 00:13:14.020 "rdma_srq_size": 0, 00:13:14.020 "reconnect_delay_sec": 0, 00:13:14.020 "timeout_admin_us": 0, 00:13:14.020 "timeout_us": 0, 00:13:14.020 "transport_ack_timeout": 0, 00:13:14.020 "transport_retry_count": 4, 00:13:14.020 "transport_tos": 0 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "bdev_nvme_set_hotplug", 00:13:14.020 "params": { 00:13:14.020 "enable": false, 00:13:14.020 "period_us": 100000 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "bdev_wait_for_examine" 00:13:14.020 } 00:13:14.020 ] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "scsi", 00:13:14.020 "config": null 00:13:14.020 
}, 00:13:14.020 { 00:13:14.020 "subsystem": "scheduler", 00:13:14.020 "config": [ 00:13:14.020 { 00:13:14.020 "method": "framework_set_scheduler", 00:13:14.020 "params": { 00:13:14.020 "name": "static" 00:13:14.020 } 00:13:14.020 } 00:13:14.020 ] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "vhost_scsi", 00:13:14.020 "config": [] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "vhost_blk", 00:13:14.020 "config": [] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "ublk", 00:13:14.020 "config": [] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "nbd", 00:13:14.020 "config": [] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "nvmf", 00:13:14.020 "config": [ 00:13:14.020 { 00:13:14.020 "method": "nvmf_set_config", 00:13:14.020 "params": { 00:13:14.020 "admin_cmd_passthru": { 00:13:14.020 "identify_ctrlr": false 00:13:14.020 }, 00:13:14.020 "discovery_filter": "match_any" 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "nvmf_set_max_subsystems", 00:13:14.020 "params": { 00:13:14.020 "max_subsystems": 1024 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "nvmf_set_crdt", 00:13:14.020 "params": { 00:13:14.020 "crdt1": 0, 00:13:14.020 "crdt2": 0, 00:13:14.020 "crdt3": 0 00:13:14.020 } 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "method": "nvmf_create_transport", 00:13:14.020 "params": { 00:13:14.020 "abort_timeout_sec": 1, 00:13:14.020 "ack_timeout": 0, 00:13:14.020 "buf_cache_size": 4294967295, 00:13:14.020 "c2h_success": true, 00:13:14.020 "dif_insert_or_strip": false, 00:13:14.020 "in_capsule_data_size": 4096, 00:13:14.020 "io_unit_size": 131072, 00:13:14.020 "max_aq_depth": 128, 00:13:14.020 "max_io_qpairs_per_ctrlr": 127, 00:13:14.020 "max_io_size": 131072, 00:13:14.020 "max_queue_depth": 128, 00:13:14.020 "num_shared_buffers": 511, 00:13:14.020 "sock_priority": 0, 00:13:14.020 "trtype": "TCP", 00:13:14.020 "zcopy": false 00:13:14.020 } 00:13:14.020 } 00:13:14.020 ] 00:13:14.020 }, 00:13:14.020 { 00:13:14.020 "subsystem": "iscsi", 00:13:14.020 "config": [ 00:13:14.020 { 00:13:14.020 "method": "iscsi_set_options", 00:13:14.020 "params": { 00:13:14.020 "allow_duplicated_isid": false, 00:13:14.020 "chap_group": 0, 00:13:14.020 "data_out_pool_size": 2048, 00:13:14.020 "default_time2retain": 20, 00:13:14.020 "default_time2wait": 2, 00:13:14.020 "disable_chap": false, 00:13:14.020 "error_recovery_level": 0, 00:13:14.020 "first_burst_length": 8192, 00:13:14.020 "immediate_data": true, 00:13:14.020 "immediate_data_pool_size": 16384, 00:13:14.020 "max_connections_per_session": 2, 00:13:14.020 "max_large_datain_per_connection": 64, 00:13:14.020 "max_queue_depth": 64, 00:13:14.020 "max_r2t_per_connection": 4, 00:13:14.020 "max_sessions": 128, 00:13:14.020 "mutual_chap": false, 00:13:14.020 "node_base": "iqn.2016-06.io.spdk", 00:13:14.020 "nop_in_interval": 30, 00:13:14.020 "nop_timeout": 60, 00:13:14.020 "pdu_pool_size": 36864, 00:13:14.020 "require_chap": false 00:13:14.020 } 00:13:14.020 } 00:13:14.020 ] 00:13:14.020 } 00:13:14.020 ] 00:13:14.020 } 00:13:14.020 11:00:42 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:14.020 11:00:42 -- rpc/skip_rpc.sh@40 -- # killprocess 73530 00:13:14.020 11:00:42 -- common/autotest_common.sh@936 -- # '[' -z 73530 ']' 00:13:14.020 11:00:42 -- common/autotest_common.sh@940 -- # kill -0 73530 00:13:14.020 11:00:42 -- common/autotest_common.sh@941 -- # uname 00:13:14.020 11:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:14.020 11:00:42 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73530 00:13:14.020 killing process with pid 73530 00:13:14.020 11:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:14.020 11:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:14.020 11:00:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73530' 00:13:14.020 11:00:42 -- common/autotest_common.sh@955 -- # kill 73530 00:13:14.020 11:00:42 -- common/autotest_common.sh@960 -- # wait 73530 00:13:14.587 11:00:43 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73564 00:13:14.587 11:00:43 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:14.587 11:00:43 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:13:19.916 11:00:48 -- rpc/skip_rpc.sh@50 -- # killprocess 73564 00:13:19.916 11:00:48 -- common/autotest_common.sh@936 -- # '[' -z 73564 ']' 00:13:19.916 11:00:48 -- common/autotest_common.sh@940 -- # kill -0 73564 00:13:19.916 11:00:48 -- common/autotest_common.sh@941 -- # uname 00:13:19.916 11:00:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.916 11:00:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73564 00:13:19.916 killing process with pid 73564 00:13:19.916 11:00:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:19.916 11:00:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:19.916 11:00:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73564' 00:13:19.916 11:00:48 -- common/autotest_common.sh@955 -- # kill 73564 00:13:19.916 11:00:48 -- common/autotest_common.sh@960 -- # wait 73564 00:13:19.916 11:00:48 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:19.916 11:00:48 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:19.916 00:13:19.916 real 0m7.073s 00:13:19.916 user 0m6.856s 00:13:19.916 sys 0m0.630s 00:13:19.916 11:00:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:19.916 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:19.916 ************************************ 00:13:19.916 END TEST skip_rpc_with_json 00:13:19.916 ************************************ 00:13:19.916 11:00:48 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:19.916 11:00:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:19.916 11:00:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.916 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:19.916 ************************************ 00:13:19.916 START TEST skip_rpc_with_delay 00:13:19.916 ************************************ 00:13:19.916 11:00:48 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:13:19.916 11:00:48 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:19.916 11:00:48 -- common/autotest_common.sh@638 -- # local es=0 00:13:19.916 11:00:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:19.917 11:00:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:19.917 11:00:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:19.917 11:00:48 -- common/autotest_common.sh@630 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:19.917 11:00:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:19.917 11:00:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:19.917 11:00:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:19.917 11:00:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:19.917 11:00:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:19.917 11:00:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:20.175 [2024-04-18 11:00:48.596904] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:13:20.175 [2024-04-18 11:00:48.597017] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:13:20.175 11:00:48 -- common/autotest_common.sh@641 -- # es=1 00:13:20.175 ************************************ 00:13:20.175 END TEST skip_rpc_with_delay 00:13:20.175 ************************************ 00:13:20.175 11:00:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:20.175 11:00:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:20.175 11:00:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:20.175 00:13:20.175 real 0m0.075s 00:13:20.175 user 0m0.053s 00:13:20.175 sys 0m0.022s 00:13:20.175 11:00:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:20.175 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 11:00:48 -- rpc/skip_rpc.sh@77 -- # uname 00:13:20.175 11:00:48 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:20.175 11:00:48 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:20.175 11:00:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:20.175 11:00:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.175 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 ************************************ 00:13:20.175 START TEST exit_on_failed_rpc_init 00:13:20.175 ************************************ 00:13:20.175 11:00:48 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:13:20.175 11:00:48 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73687 00:13:20.175 11:00:48 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:20.175 11:00:48 -- rpc/skip_rpc.sh@63 -- # waitforlisten 73687 00:13:20.175 11:00:48 -- common/autotest_common.sh@817 -- # '[' -z 73687 ']' 00:13:20.175 11:00:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.175 11:00:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:20.175 11:00:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.175 11:00:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:20.175 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:20.175 [2024-04-18 11:00:48.789540] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
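Before the exit_on_failed_rpc_init target finishes starting (its EAL parameter line follows below), note what skip_rpc_with_json verified above: a configuration built up over RPC, including the freshly created TCP transport, survives a save_config snapshot and can boot a new target directly with --json. A by-hand sketch of that round trip, with placeholder paths under /tmp:

# Snapshot the live configuration of an already-configured target
scripts/rpc.py save_config > /tmp/config.json

# Boot a fresh target straight from the snapshot; no RPC server is needed for this check
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/spdk_tgt.log 2>&1 &

# The restored config should re-create the TCP transport, the same marker the test greps for
sleep 5 && grep -q 'TCP Transport Init' /tmp/spdk_tgt.log && echo 'transport restored'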
00:13:20.175 [2024-04-18 11:00:48.789627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73687 ] 00:13:20.434 [2024-04-18 11:00:48.926627] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.434 [2024-04-18 11:00:49.017018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.371 11:00:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:21.371 11:00:49 -- common/autotest_common.sh@850 -- # return 0 00:13:21.371 11:00:49 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:21.371 11:00:49 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:21.371 11:00:49 -- common/autotest_common.sh@638 -- # local es=0 00:13:21.371 11:00:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:21.371 11:00:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:21.371 11:00:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:21.371 11:00:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:21.371 11:00:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:21.371 11:00:49 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:21.371 11:00:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:21.371 11:00:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:21.371 11:00:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:21.371 11:00:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:21.371 [2024-04-18 11:00:49.831279] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:21.371 [2024-04-18 11:00:49.831386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73717 ] 00:13:21.371 [2024-04-18 11:00:49.975478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.629 [2024-04-18 11:00:50.075302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.629 [2024-04-18 11:00:50.075415] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
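The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above (and the aborted startup that follows) is exactly what exit_on_failed_rpc_init is after: a second spdk_tgt launched against an already-claimed RPC socket must fail its init and exit non-zero. To genuinely run two targets side by side, each instance needs its own RPC listen address via -r, and rpc.py is pointed at the right one with -s; a sketch with a hypothetical second socket path (hugepage files are already isolated per instance by the spdk_pid<pid> file prefix visible in the EAL parameters above):

# First instance on the default socket, second on its own socket and core mask
build/bin/spdk_tgt -m 0x1 &
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &

# Address each instance explicitly
scripts/rpc.py spdk_get_version
scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version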
00:13:21.629 [2024-04-18 11:00:50.075433] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:21.629 [2024-04-18 11:00:50.075444] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:21.629 11:00:50 -- common/autotest_common.sh@641 -- # es=234 00:13:21.629 11:00:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:21.629 11:00:50 -- common/autotest_common.sh@650 -- # es=106 00:13:21.629 11:00:50 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:21.629 11:00:50 -- common/autotest_common.sh@658 -- # es=1 00:13:21.629 11:00:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:21.629 11:00:50 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:21.629 11:00:50 -- rpc/skip_rpc.sh@70 -- # killprocess 73687 00:13:21.629 11:00:50 -- common/autotest_common.sh@936 -- # '[' -z 73687 ']' 00:13:21.629 11:00:50 -- common/autotest_common.sh@940 -- # kill -0 73687 00:13:21.629 11:00:50 -- common/autotest_common.sh@941 -- # uname 00:13:21.629 11:00:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:21.629 11:00:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73687 00:13:21.629 killing process with pid 73687 00:13:21.629 11:00:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:21.629 11:00:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:21.629 11:00:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73687' 00:13:21.629 11:00:50 -- common/autotest_common.sh@955 -- # kill 73687 00:13:21.629 11:00:50 -- common/autotest_common.sh@960 -- # wait 73687 00:13:22.211 00:13:22.211 real 0m1.842s 00:13:22.211 user 0m2.156s 00:13:22.211 sys 0m0.419s 00:13:22.211 11:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.211 ************************************ 00:13:22.211 END TEST exit_on_failed_rpc_init 00:13:22.211 ************************************ 00:13:22.211 11:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:22.211 11:00:50 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:22.211 00:13:22.211 real 0m14.953s 00:13:22.211 user 0m14.293s 00:13:22.211 sys 0m1.646s 00:13:22.211 11:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.211 11:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:22.211 ************************************ 00:13:22.211 END TEST skip_rpc 00:13:22.211 ************************************ 00:13:22.211 11:00:50 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:22.211 11:00:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:22.211 11:00:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.211 11:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:22.211 ************************************ 00:13:22.211 START TEST rpc_client 00:13:22.211 ************************************ 00:13:22.211 11:00:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:22.211 * Looking for test storage... 
00:13:22.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:13:22.211 11:00:50 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:13:22.211 OK 00:13:22.211 11:00:50 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:22.211 00:13:22.211 real 0m0.108s 00:13:22.211 user 0m0.051s 00:13:22.211 sys 0m0.062s 00:13:22.211 ************************************ 00:13:22.211 END TEST rpc_client 00:13:22.211 ************************************ 00:13:22.211 11:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.211 11:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:22.469 11:00:50 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:22.469 11:00:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:22.469 11:00:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.469 11:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:22.469 ************************************ 00:13:22.469 START TEST json_config 00:13:22.469 ************************************ 00:13:22.469 11:00:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:22.469 11:00:51 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:22.469 11:00:51 -- nvmf/common.sh@7 -- # uname -s 00:13:22.469 11:00:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.469 11:00:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.469 11:00:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.469 11:00:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.469 11:00:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.469 11:00:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.469 11:00:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.469 11:00:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.469 11:00:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.469 11:00:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.469 11:00:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:13:22.470 11:00:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:13:22.470 11:00:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.470 11:00:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.470 11:00:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:22.470 11:00:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.470 11:00:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:22.470 11:00:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.470 11:00:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.470 11:00:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.470 11:00:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.470 11:00:51 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.470 11:00:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.470 11:00:51 -- paths/export.sh@5 -- # export PATH 00:13:22.470 11:00:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.470 11:00:51 -- nvmf/common.sh@47 -- # : 0 00:13:22.470 11:00:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.470 11:00:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.470 11:00:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.470 11:00:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.470 11:00:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.470 11:00:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.470 11:00:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.470 11:00:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.470 11:00:51 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:22.470 11:00:51 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:22.470 11:00:51 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:22.470 11:00:51 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:22.470 11:00:51 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:22.470 11:00:51 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:13:22.470 11:00:51 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:13:22.470 11:00:51 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:13:22.470 11:00:51 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:13:22.470 11:00:51 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:13:22.470 11:00:51 -- json_config/json_config.sh@33 -- # declare -A app_params 00:13:22.470 11:00:51 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:13:22.470 11:00:51 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:13:22.470 11:00:51 -- json_config/json_config.sh@40 -- # last_event_id=0 00:13:22.470 
11:00:51 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:22.470 11:00:51 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:13:22.470 INFO: JSON configuration test init 00:13:22.470 11:00:51 -- json_config/json_config.sh@357 -- # json_config_test_init 00:13:22.470 11:00:51 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:13:22.470 11:00:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:22.470 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:13:22.470 11:00:51 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:13:22.470 11:00:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:22.470 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:13:22.470 11:00:51 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:13:22.470 11:00:51 -- json_config/common.sh@9 -- # local app=target 00:13:22.470 11:00:51 -- json_config/common.sh@10 -- # shift 00:13:22.470 11:00:51 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:22.470 11:00:51 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:22.470 11:00:51 -- json_config/common.sh@15 -- # local app_extra_params= 00:13:22.470 11:00:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:22.470 11:00:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:22.470 11:00:51 -- json_config/common.sh@22 -- # app_pid["$app"]=73847 00:13:22.470 11:00:51 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:22.470 Waiting for target to run... 00:13:22.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:22.470 11:00:51 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:13:22.470 11:00:51 -- json_config/common.sh@25 -- # waitforlisten 73847 /var/tmp/spdk_tgt.sock 00:13:22.470 11:00:51 -- common/autotest_common.sh@817 -- # '[' -z 73847 ']' 00:13:22.470 11:00:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:22.470 11:00:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:22.470 11:00:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:22.470 11:00:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:22.470 11:00:51 -- common/autotest_common.sh@10 -- # set +x 00:13:22.727 [2024-04-18 11:00:51.124763] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:22.727 [2024-04-18 11:00:51.125215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73847 ] 00:13:22.996 [2024-04-18 11:00:51.554134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.996 [2024-04-18 11:00:51.626779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.568 00:13:23.568 11:00:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:23.568 11:00:52 -- common/autotest_common.sh@850 -- # return 0 00:13:23.568 11:00:52 -- json_config/common.sh@26 -- # echo '' 00:13:23.568 11:00:52 -- json_config/json_config.sh@269 -- # create_accel_config 00:13:23.568 11:00:52 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:13:23.568 11:00:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:23.568 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:23.568 11:00:52 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:13:23.568 11:00:52 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:13:23.568 11:00:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:23.568 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:23.568 11:00:52 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:13:23.568 11:00:52 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:13:23.568 11:00:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:13:24.135 11:00:52 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:13:24.135 11:00:52 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:13:24.135 11:00:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:24.135 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:24.135 11:00:52 -- json_config/json_config.sh@45 -- # local ret=0 00:13:24.135 11:00:52 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:13:24.135 11:00:52 -- json_config/json_config.sh@46 -- # local enabled_types 00:13:24.135 11:00:52 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:13:24.135 11:00:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:13:24.135 11:00:52 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:13:24.393 11:00:52 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:13:24.393 11:00:52 -- json_config/json_config.sh@48 -- # local get_types 00:13:24.393 11:00:52 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:13:24.393 11:00:52 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:13:24.393 11:00:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:24.393 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:24.393 11:00:52 -- json_config/json_config.sh@55 -- # return 0 00:13:24.393 11:00:52 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:13:24.393 11:00:52 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:13:24.393 11:00:52 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:13:24.393 11:00:52 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
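At this point json_config.sh has a target running with --wait-for-rpc on /var/tmp/spdk_tgt.sock, has confirmed the bdev_register/bdev_unregister notification types, and is about to build the NVMe-oF subsystem configuration. The config-loading step it just performed (gen_nvme.sh output fed into load_config) can be repeated by hand roughly as follows (a sketch assuming the same socket; the temp path is a placeholder, and gen_nvme.sh only emits entries for NVMe controllers actually present on the host):

# Describe local NVMe controllers as a bdev subsystem config in JSON
scripts/gen_nvme.sh --json-with-subsystems > /tmp/nvme_config.json

# Feed the config to the target over its dedicated RPC socket
scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < /tmp/nvme_config.json

# Check which notification types the target reports, as the test does before continuing
scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types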
00:13:24.393 11:00:52 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:13:24.393 11:00:52 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:13:24.393 11:00:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:24.393 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:24.393 11:00:52 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:13:24.393 11:00:52 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:13:24.393 11:00:52 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:13:24.393 11:00:52 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:24.393 11:00:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:24.651 MallocForNvmf0 00:13:24.651 11:00:53 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:24.651 11:00:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:25.218 MallocForNvmf1 00:13:25.218 11:00:53 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:13:25.218 11:00:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:13:25.522 [2024-04-18 11:00:53.889407] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.522 11:00:53 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:25.522 11:00:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:25.781 11:00:54 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:25.781 11:00:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:26.039 11:00:54 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:26.039 11:00:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:26.297 11:00:54 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:26.297 11:00:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:26.297 [2024-04-18 11:00:54.937954] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:26.556 11:00:54 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:13:26.556 11:00:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:26.556 11:00:54 -- common/autotest_common.sh@10 -- # set +x 00:13:26.556 11:00:54 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:13:26.556 11:00:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:26.556 11:00:55 -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.556 11:00:55 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:13:26.556 11:00:55 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:26.556 11:00:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:26.815 MallocBdevForConfigChangeCheck 00:13:26.815 11:00:55 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:13:26.816 11:00:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:26.816 11:00:55 -- common/autotest_common.sh@10 -- # set +x 00:13:26.816 11:00:55 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:13:26.816 11:00:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:27.074 INFO: shutting down applications... 00:13:27.074 11:00:55 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:13:27.075 11:00:55 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:13:27.075 11:00:55 -- json_config/json_config.sh@368 -- # json_config_clear target 00:13:27.075 11:00:55 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:13:27.075 11:00:55 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:13:27.338 Calling clear_iscsi_subsystem 00:13:27.338 Calling clear_nvmf_subsystem 00:13:27.338 Calling clear_nbd_subsystem 00:13:27.338 Calling clear_ublk_subsystem 00:13:27.338 Calling clear_vhost_blk_subsystem 00:13:27.338 Calling clear_vhost_scsi_subsystem 00:13:27.338 Calling clear_bdev_subsystem 00:13:27.598 11:00:55 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:13:27.598 11:00:55 -- json_config/json_config.sh@343 -- # count=100 00:13:27.598 11:00:55 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:13:27.598 11:00:55 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:27.598 11:00:55 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:13:27.598 11:00:55 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:13:27.857 11:00:56 -- json_config/json_config.sh@345 -- # break 00:13:27.857 11:00:56 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:13:27.857 11:00:56 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:13:27.857 11:00:56 -- json_config/common.sh@31 -- # local app=target 00:13:27.857 11:00:56 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:27.857 11:00:56 -- json_config/common.sh@35 -- # [[ -n 73847 ]] 00:13:27.857 11:00:56 -- json_config/common.sh@38 -- # kill -SIGINT 73847 00:13:27.857 11:00:56 -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:27.857 11:00:56 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:27.857 11:00:56 -- json_config/common.sh@41 -- # kill -0 73847 00:13:27.857 11:00:56 -- json_config/common.sh@45 -- # sleep 0.5 00:13:28.438 11:00:56 -- json_config/common.sh@40 -- # (( i++ )) 00:13:28.438 11:00:56 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:28.438 11:00:56 -- json_config/common.sh@41 -- # kill -0 73847 00:13:28.438 11:00:56 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:13:28.438 11:00:56 -- json_config/common.sh@43 -- # break 00:13:28.438 11:00:56 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:28.438 SPDK target shutdown done 00:13:28.438 11:00:56 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:28.438 INFO: relaunching applications... 00:13:28.438 11:00:56 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:13:28.438 11:00:56 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:28.438 11:00:56 -- json_config/common.sh@9 -- # local app=target 00:13:28.438 11:00:56 -- json_config/common.sh@10 -- # shift 00:13:28.438 11:00:56 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:28.438 11:00:56 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:28.438 11:00:56 -- json_config/common.sh@15 -- # local app_extra_params= 00:13:28.438 11:00:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:28.438 11:00:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:28.438 11:00:56 -- json_config/common.sh@22 -- # app_pid["$app"]=74127 00:13:28.438 11:00:56 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:28.438 Waiting for target to run... 00:13:28.438 11:00:56 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:28.438 11:00:56 -- json_config/common.sh@25 -- # waitforlisten 74127 /var/tmp/spdk_tgt.sock 00:13:28.438 11:00:56 -- common/autotest_common.sh@817 -- # '[' -z 74127 ']' 00:13:28.438 11:00:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:28.438 11:00:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:28.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:28.438 11:00:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:28.438 11:00:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:28.438 11:00:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.438 [2024-04-18 11:00:56.935284] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:28.438 [2024-04-18 11:00:56.935396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74127 ] 00:13:29.012 [2024-04-18 11:00:57.365606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.012 [2024-04-18 11:00:57.434215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.269 [2024-04-18 11:00:57.734184] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.269 [2024-04-18 11:00:57.766266] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:29.527 11:00:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:29.527 11:00:57 -- common/autotest_common.sh@850 -- # return 0 00:13:29.527 00:13:29.527 INFO: Checking if target configuration is the same... 
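The configuration being re-checked here was assembled earlier in the run with a short series of RPCs. Condensed into one place, the sequence looks roughly like the sketch below (bash), reusing the rpc.py calls, sizes and addresses that appear in the trace; treat it as an illustration rather than a copy of the test scripts:

    SOCK=/var/tmp/spdk_tgt.sock
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $SOCK"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # persist the live configuration so spdk_tgt can be relaunched with --json <file>
    $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json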
00:13:29.527 11:00:57 -- json_config/common.sh@26 -- # echo '' 00:13:29.527 11:00:57 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:13:29.527 11:00:57 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:13:29.527 11:00:57 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:29.527 11:00:57 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:13:29.527 11:00:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:29.527 + '[' 2 -ne 2 ']' 00:13:29.527 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:13:29.527 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:13:29.527 + rootdir=/home/vagrant/spdk_repo/spdk 00:13:29.527 +++ basename /dev/fd/62 00:13:29.527 ++ mktemp /tmp/62.XXX 00:13:29.527 + tmp_file_1=/tmp/62.BJ6 00:13:29.527 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:29.527 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:13:29.527 + tmp_file_2=/tmp/spdk_tgt_config.json.TwP 00:13:29.527 + ret=0 00:13:29.527 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:29.792 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:29.792 + diff -u /tmp/62.BJ6 /tmp/spdk_tgt_config.json.TwP 00:13:29.792 INFO: JSON config files are the same 00:13:29.792 + echo 'INFO: JSON config files are the same' 00:13:29.792 + rm /tmp/62.BJ6 /tmp/spdk_tgt_config.json.TwP 00:13:29.792 + exit 0 00:13:29.792 INFO: changing configuration and checking if this can be detected... 00:13:29.792 11:00:58 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:13:29.792 11:00:58 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:13:29.792 11:00:58 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:13:29.792 11:00:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:13:30.361 11:00:58 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:30.361 11:00:58 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:13:30.361 11:00:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:30.361 + '[' 2 -ne 2 ']' 00:13:30.361 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:13:30.361 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:13:30.361 + rootdir=/home/vagrant/spdk_repo/spdk 00:13:30.361 +++ basename /dev/fd/62 00:13:30.361 ++ mktemp /tmp/62.XXX 00:13:30.361 + tmp_file_1=/tmp/62.3x1 00:13:30.361 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:30.361 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:13:30.361 + tmp_file_2=/tmp/spdk_tgt_config.json.tlF 00:13:30.361 + ret=0 00:13:30.361 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:30.620 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:13:30.620 + diff -u /tmp/62.3x1 /tmp/spdk_tgt_config.json.tlF 00:13:30.620 + ret=1 00:13:30.620 + echo '=== Start of file: /tmp/62.3x1 ===' 00:13:30.620 + cat /tmp/62.3x1 00:13:30.620 + echo '=== End of file: /tmp/62.3x1 ===' 00:13:30.620 + echo '' 00:13:30.620 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tlF ===' 00:13:30.620 + cat /tmp/spdk_tgt_config.json.tlF 00:13:30.620 + echo '=== End of file: /tmp/spdk_tgt_config.json.tlF ===' 00:13:30.620 + echo '' 00:13:30.620 + rm /tmp/62.3x1 /tmp/spdk_tgt_config.json.tlF 00:13:30.620 + exit 1 00:13:30.620 INFO: configuration change detected. 00:13:30.620 11:00:59 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:13:30.620 11:00:59 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:13:30.620 11:00:59 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:13:30.620 11:00:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:30.620 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:30.620 11:00:59 -- json_config/json_config.sh@307 -- # local ret=0 00:13:30.620 11:00:59 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:13:30.620 11:00:59 -- json_config/json_config.sh@317 -- # [[ -n 74127 ]] 00:13:30.620 11:00:59 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:13:30.620 11:00:59 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:13:30.620 11:00:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:30.620 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:30.620 11:00:59 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:13:30.620 11:00:59 -- json_config/json_config.sh@193 -- # uname -s 00:13:30.620 11:00:59 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:13:30.620 11:00:59 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:13:30.620 11:00:59 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:13:30.620 11:00:59 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:13:30.620 11:00:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:30.620 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:30.620 11:00:59 -- json_config/json_config.sh@323 -- # killprocess 74127 00:13:30.620 11:00:59 -- common/autotest_common.sh@936 -- # '[' -z 74127 ']' 00:13:30.620 11:00:59 -- common/autotest_common.sh@940 -- # kill -0 74127 00:13:30.620 11:00:59 -- common/autotest_common.sh@941 -- # uname 00:13:30.878 11:00:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.878 11:00:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74127 00:13:30.878 killing process with pid 74127 00:13:30.878 11:00:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:30.878 11:00:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:30.878 11:00:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74127' 00:13:30.878 
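The change-detection step just traced follows a simple pattern: dump the live configuration, sort both the dump and the reference file so ordering differences are ignored, and diff the two; any remaining difference flips ret to 1. A hedged sketch of the same idea in bash, assuming config_filter.py reads JSON on stdin and writes the sorted result to stdout, as json_diff.sh uses it above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    ref=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    live=$(mktemp) ; sorted_ref=$(mktemp)
    $RPC save_config | $FILTER -method sort > "$live"
    $FILTER -method sort < "$ref" > "$sorted_ref"
    if diff -u "$sorted_ref" "$live" > /dev/null; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$sorted_ref"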
11:00:59 -- common/autotest_common.sh@955 -- # kill 74127 00:13:30.878 11:00:59 -- common/autotest_common.sh@960 -- # wait 74127 00:13:30.878 11:00:59 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:13:30.878 11:00:59 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:13:30.878 11:00:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:30.878 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:31.136 11:00:59 -- json_config/json_config.sh@328 -- # return 0 00:13:31.136 INFO: Success 00:13:31.136 11:00:59 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:13:31.136 00:13:31.136 real 0m8.594s 00:13:31.136 user 0m12.335s 00:13:31.136 sys 0m1.930s 00:13:31.136 ************************************ 00:13:31.136 END TEST json_config 00:13:31.136 ************************************ 00:13:31.136 11:00:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:31.136 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:31.136 11:00:59 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:31.136 11:00:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:31.136 11:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.136 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:31.136 ************************************ 00:13:31.136 START TEST json_config_extra_key 00:13:31.136 ************************************ 00:13:31.136 11:00:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:31.136 11:00:59 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:31.136 11:00:59 -- nvmf/common.sh@7 -- # uname -s 00:13:31.136 11:00:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.136 11:00:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.136 11:00:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.136 11:00:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.136 11:00:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.136 11:00:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.136 11:00:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.136 11:00:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.136 11:00:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.136 11:00:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.136 11:00:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:13:31.136 11:00:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:13:31.136 11:00:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.136 11:00:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.136 11:00:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:31.136 11:00:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.137 11:00:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:31.137 11:00:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.137 11:00:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.137 11:00:59 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.137 11:00:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.137 11:00:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.137 11:00:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.137 11:00:59 -- paths/export.sh@5 -- # export PATH 00:13:31.137 11:00:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.137 11:00:59 -- nvmf/common.sh@47 -- # : 0 00:13:31.137 11:00:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.137 11:00:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.137 11:00:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.137 11:00:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.137 11:00:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.137 11:00:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.137 11:00:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.137 11:00:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:13:31.137 INFO: launching applications... 00:13:31.137 11:00:59 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:31.137 11:00:59 -- json_config/common.sh@9 -- # local app=target 00:13:31.137 11:00:59 -- json_config/common.sh@10 -- # shift 00:13:31.137 11:00:59 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:31.137 11:00:59 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:31.137 Waiting for target to run... 00:13:31.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:31.137 11:00:59 -- json_config/common.sh@15 -- # local app_extra_params= 00:13:31.137 11:00:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:31.137 11:00:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:31.137 11:00:59 -- json_config/common.sh@22 -- # app_pid["$app"]=74309 00:13:31.137 11:00:59 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:31.137 11:00:59 -- json_config/common.sh@25 -- # waitforlisten 74309 /var/tmp/spdk_tgt.sock 00:13:31.137 11:00:59 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:31.137 11:00:59 -- common/autotest_common.sh@817 -- # '[' -z 74309 ']' 00:13:31.137 11:00:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:31.137 11:00:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:31.137 11:00:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:31.137 11:00:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:31.137 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:13:31.395 [2024-04-18 11:00:59.802906] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:31.395 [2024-04-18 11:00:59.803197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74309 ] 00:13:31.652 [2024-04-18 11:01:00.210479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.652 [2024-04-18 11:01:00.280162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.218 11:01:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:32.476 11:01:00 -- common/autotest_common.sh@850 -- # return 0 00:13:32.476 11:01:00 -- json_config/common.sh@26 -- # echo '' 00:13:32.476 00:13:32.476 11:01:00 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:13:32.476 INFO: shutting down applications... 
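The shutdown that follows uses the same loop seen earlier for pid 74127: send SIGINT to the target, then poll with kill -0 until the process disappears, sleeping half a second between checks and giving up after 30 iterations. A stand-alone sketch of that pattern (bash; app_pid stands for whatever waitforlisten recorded, 74309 in this run):

    pid=$app_pid
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done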
00:13:32.476 11:01:00 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:13:32.476 11:01:00 -- json_config/common.sh@31 -- # local app=target 00:13:32.476 11:01:00 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:32.476 11:01:00 -- json_config/common.sh@35 -- # [[ -n 74309 ]] 00:13:32.476 11:01:00 -- json_config/common.sh@38 -- # kill -SIGINT 74309 00:13:32.476 11:01:00 -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:32.476 11:01:00 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:32.476 11:01:00 -- json_config/common.sh@41 -- # kill -0 74309 00:13:32.476 11:01:00 -- json_config/common.sh@45 -- # sleep 0.5 00:13:32.734 11:01:01 -- json_config/common.sh@40 -- # (( i++ )) 00:13:32.734 11:01:01 -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:32.734 11:01:01 -- json_config/common.sh@41 -- # kill -0 74309 00:13:32.734 11:01:01 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:32.734 SPDK target shutdown done 00:13:32.734 Success 00:13:32.734 11:01:01 -- json_config/common.sh@43 -- # break 00:13:32.734 11:01:01 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:32.734 11:01:01 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:32.734 11:01:01 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:13:32.734 ************************************ 00:13:32.734 END TEST json_config_extra_key 00:13:32.734 ************************************ 00:13:32.734 00:13:32.734 real 0m1.711s 00:13:32.734 user 0m1.685s 00:13:32.734 sys 0m0.430s 00:13:32.734 11:01:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:32.734 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:13:32.992 11:01:01 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:32.992 11:01:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:32.992 11:01:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:32.992 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:13:32.992 ************************************ 00:13:32.992 START TEST alias_rpc 00:13:32.992 ************************************ 00:13:32.992 11:01:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:32.992 * Looking for test storage... 00:13:32.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:13:32.992 11:01:01 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:32.992 11:01:01 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74391 00:13:32.992 11:01:01 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:32.992 11:01:01 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74391 00:13:32.992 11:01:01 -- common/autotest_common.sh@817 -- # '[' -z 74391 ']' 00:13:32.992 11:01:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.992 11:01:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:32.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.992 11:01:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.992 11:01:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:32.992 11:01:01 -- common/autotest_common.sh@10 -- # set +x 00:13:32.992 [2024-04-18 11:01:01.631891] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:32.992 [2024-04-18 11:01:01.631998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74391 ] 00:13:33.250 [2024-04-18 11:01:01.773100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.250 [2024-04-18 11:01:01.868687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.182 11:01:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:34.182 11:01:02 -- common/autotest_common.sh@850 -- # return 0 00:13:34.182 11:01:02 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:13:34.440 11:01:02 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74391 00:13:34.440 11:01:02 -- common/autotest_common.sh@936 -- # '[' -z 74391 ']' 00:13:34.440 11:01:02 -- common/autotest_common.sh@940 -- # kill -0 74391 00:13:34.440 11:01:02 -- common/autotest_common.sh@941 -- # uname 00:13:34.440 11:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:34.440 11:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74391 00:13:34.440 killing process with pid 74391 00:13:34.440 11:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:34.440 11:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:34.440 11:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74391' 00:13:34.440 11:01:02 -- common/autotest_common.sh@955 -- # kill 74391 00:13:34.440 11:01:02 -- common/autotest_common.sh@960 -- # wait 74391 00:13:34.698 ************************************ 00:13:34.698 END TEST alias_rpc 00:13:34.698 ************************************ 00:13:34.698 00:13:34.698 real 0m1.821s 00:13:34.698 user 0m2.070s 00:13:34.698 sys 0m0.463s 00:13:34.698 11:01:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:34.698 11:01:03 -- common/autotest_common.sh@10 -- # set +x 00:13:34.956 11:01:03 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:13:34.956 11:01:03 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:34.956 11:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:34.956 11:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.956 11:01:03 -- common/autotest_common.sh@10 -- # set +x 00:13:34.956 ************************************ 00:13:34.956 START TEST dpdk_mem_utility 00:13:34.956 ************************************ 00:13:34.956 11:01:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:34.956 * Looking for test storage... 00:13:34.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:13:34.956 11:01:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:34.956 11:01:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74489 00:13:34.956 11:01:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:34.956 11:01:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74489 00:13:34.956 11:01:03 -- common/autotest_common.sh@817 -- # '[' -z 74489 ']' 00:13:34.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
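The heap and mempool dump that follows is produced by asking the target to write its DPDK memory statistics and then summarizing them with the dpdk_mem_info.py helper, the same two steps traced below. A rough sketch (bash), assuming rpc.py's default socket /var/tmp/spdk.sock used by this test and the /tmp/spdk_mem_dump.txt path reported by the RPC:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    $RPC env_dpdk_get_mem_stats     # target writes its stats to /tmp/spdk_mem_dump.txt
    $MEM_SCRIPT                     # heap / mempool / memzone summary
    $MEM_SCRIPT -m 0                # detailed free/busy element list for heap id 0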
00:13:34.956 11:01:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.956 11:01:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:34.956 11:01:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.956 11:01:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:34.956 11:01:03 -- common/autotest_common.sh@10 -- # set +x 00:13:35.214 [2024-04-18 11:01:03.597883] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:35.214 [2024-04-18 11:01:03.597995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74489 ] 00:13:35.214 [2024-04-18 11:01:03.735919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.214 [2024-04-18 11:01:03.830471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.163 11:01:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:36.163 11:01:04 -- common/autotest_common.sh@850 -- # return 0 00:13:36.163 11:01:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:13:36.163 11:01:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:13:36.163 11:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.163 11:01:04 -- common/autotest_common.sh@10 -- # set +x 00:13:36.163 { 00:13:36.163 "filename": "/tmp/spdk_mem_dump.txt" 00:13:36.163 } 00:13:36.163 11:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.163 11:01:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:36.163 DPDK memory size 814.000000 MiB in 1 heap(s) 00:13:36.163 1 heaps totaling size 814.000000 MiB 00:13:36.163 size: 814.000000 MiB heap id: 0 00:13:36.163 end heaps---------- 00:13:36.163 8 mempools totaling size 598.116089 MiB 00:13:36.163 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:13:36.163 size: 158.602051 MiB name: PDU_data_out_Pool 00:13:36.163 size: 84.521057 MiB name: bdev_io_74489 00:13:36.163 size: 51.011292 MiB name: evtpool_74489 00:13:36.163 size: 50.003479 MiB name: msgpool_74489 00:13:36.163 size: 21.763794 MiB name: PDU_Pool 00:13:36.163 size: 19.513306 MiB name: SCSI_TASK_Pool 00:13:36.163 size: 0.026123 MiB name: Session_Pool 00:13:36.163 end mempools------- 00:13:36.163 6 memzones totaling size 4.142822 MiB 00:13:36.163 size: 1.000366 MiB name: RG_ring_0_74489 00:13:36.163 size: 1.000366 MiB name: RG_ring_1_74489 00:13:36.163 size: 1.000366 MiB name: RG_ring_4_74489 00:13:36.163 size: 1.000366 MiB name: RG_ring_5_74489 00:13:36.163 size: 0.125366 MiB name: RG_ring_2_74489 00:13:36.163 size: 0.015991 MiB name: RG_ring_3_74489 00:13:36.163 end memzones------- 00:13:36.163 11:01:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:13:36.163 heap id: 0 total size: 814.000000 MiB number of busy elements: 234 number of free elements: 15 00:13:36.163 list of free elements. 
size: 12.484009 MiB 00:13:36.163 element at address: 0x200000400000 with size: 1.999512 MiB 00:13:36.163 element at address: 0x200018e00000 with size: 0.999878 MiB 00:13:36.163 element at address: 0x200019000000 with size: 0.999878 MiB 00:13:36.163 element at address: 0x200003e00000 with size: 0.996277 MiB 00:13:36.163 element at address: 0x200031c00000 with size: 0.994446 MiB 00:13:36.163 element at address: 0x200013800000 with size: 0.978699 MiB 00:13:36.163 element at address: 0x200007000000 with size: 0.959839 MiB 00:13:36.163 element at address: 0x200019200000 with size: 0.936584 MiB 00:13:36.163 element at address: 0x200000200000 with size: 0.836853 MiB 00:13:36.163 element at address: 0x20001aa00000 with size: 0.570618 MiB 00:13:36.163 element at address: 0x20000b200000 with size: 0.489441 MiB 00:13:36.163 element at address: 0x200000800000 with size: 0.486877 MiB 00:13:36.163 element at address: 0x200019400000 with size: 0.485657 MiB 00:13:36.163 element at address: 0x200027e00000 with size: 0.397949 MiB 00:13:36.163 element at address: 0x200003a00000 with size: 0.351501 MiB 00:13:36.163 list of standard malloc elements. size: 199.253418 MiB 00:13:36.163 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:13:36.163 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:13:36.163 element at address: 0x200018efff80 with size: 1.000122 MiB 00:13:36.163 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:13:36.163 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:13:36.163 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:13:36.163 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:13:36.163 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:13:36.163 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:13:36.163 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:13:36.163 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:13:36.164 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003adb300 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003adb500 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003affa80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003affb40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:13:36.164 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93d00 
with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:13:36.164 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d740 with size: 0.000183 MiB 
00:13:36.164 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:13:36.164 element at 
address: 0x200027e6fcc0 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:13:36.164 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:13:36.164 list of memzone associated elements. size: 602.262573 MiB 00:13:36.164 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:13:36.164 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:13:36.164 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:13:36.164 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:13:36.164 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:13:36.164 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74489_0 00:13:36.164 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:13:36.164 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74489_0 00:13:36.164 element at address: 0x200003fff380 with size: 48.003052 MiB 00:13:36.164 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74489_0 00:13:36.164 element at address: 0x2000195be940 with size: 20.255554 MiB 00:13:36.164 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:13:36.164 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:13:36.164 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:13:36.164 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:13:36.164 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74489 00:13:36.164 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:13:36.164 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74489 00:13:36.164 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:13:36.164 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74489 00:13:36.164 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:13:36.164 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:13:36.164 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:13:36.164 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:13:36.164 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:13:36.164 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:13:36.164 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:13:36.164 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:13:36.164 element at address: 0x200003eff180 with size: 1.000488 MiB 00:13:36.164 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74489 00:13:36.164 element at address: 0x200003affc00 with size: 1.000488 MiB 00:13:36.164 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74489 00:13:36.164 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:13:36.164 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74489 00:13:36.164 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:13:36.164 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74489 00:13:36.164 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:13:36.164 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74489 00:13:36.164 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:13:36.164 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:13:36.164 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:13:36.164 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:13:36.164 element at address: 0x20001947c540 with size: 0.250488 MiB 00:13:36.164 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:13:36.164 element at address: 0x200003adf880 with size: 0.125488 MiB 00:13:36.164 associated memzone info: size: 0.125366 MiB name: RG_ring_2_74489 00:13:36.164 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:13:36.164 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:13:36.164 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:13:36.164 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:13:36.164 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:13:36.164 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74489 00:13:36.164 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:13:36.164 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:13:36.164 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:13:36.164 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74489 00:13:36.164 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:13:36.164 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74489 00:13:36.164 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:13:36.164 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:13:36.164 11:01:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:13:36.164 11:01:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74489 00:13:36.164 11:01:04 -- common/autotest_common.sh@936 -- # '[' -z 74489 ']' 00:13:36.164 11:01:04 -- common/autotest_common.sh@940 -- # kill -0 74489 00:13:36.164 11:01:04 -- common/autotest_common.sh@941 -- # uname 00:13:36.164 11:01:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.164 11:01:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74489 00:13:36.164 11:01:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:36.164 11:01:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:36.164 11:01:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74489' 00:13:36.164 killing process with pid 74489 00:13:36.164 11:01:04 -- common/autotest_common.sh@955 -- # kill 74489 00:13:36.164 11:01:04 -- common/autotest_common.sh@960 -- # wait 74489 00:13:36.731 00:13:36.731 real 0m1.668s 00:13:36.731 user 0m1.785s 00:13:36.731 sys 0m0.436s 00:13:36.731 11:01:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:36.731 ************************************ 00:13:36.731 END TEST dpdk_mem_utility 00:13:36.731 ************************************ 00:13:36.731 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.731 11:01:05 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:36.731 11:01:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:36.731 11:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.731 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.731 ************************************ 00:13:36.731 START TEST event 00:13:36.731 ************************************ 00:13:36.731 11:01:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:36.731 * Looking for test storage... 
00:13:36.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:36.731 11:01:05 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:36.731 11:01:05 -- bdev/nbd_common.sh@6 -- # set -e 00:13:36.731 11:01:05 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:36.731 11:01:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:13:36.731 11:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.731 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:13:36.989 ************************************ 00:13:36.989 START TEST event_perf 00:13:36.989 ************************************ 00:13:36.989 11:01:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:36.989 Running I/O for 1 seconds...[2024-04-18 11:01:05.405640] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:36.989 [2024-04-18 11:01:05.405737] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74588 ] 00:13:36.989 [2024-04-18 11:01:05.539444] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.246 [2024-04-18 11:01:05.641742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.246 [2024-04-18 11:01:05.641862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.246 [2024-04-18 11:01:05.641922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.246 [2024-04-18 11:01:05.641929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.179 Running I/O for 1 seconds... 00:13:38.179 lcore 0: 179252 00:13:38.179 lcore 1: 179251 00:13:38.179 lcore 2: 179250 00:13:38.179 lcore 3: 179251 00:13:38.179 done. 00:13:38.179 00:13:38.179 real 0m1.332s 00:13:38.179 user 0m4.146s 00:13:38.179 sys 0m0.065s 00:13:38.179 11:01:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:38.179 11:01:06 -- common/autotest_common.sh@10 -- # set +x 00:13:38.179 ************************************ 00:13:38.179 END TEST event_perf 00:13:38.179 ************************************ 00:13:38.179 11:01:06 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:38.179 11:01:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:38.179 11:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.179 11:01:06 -- common/autotest_common.sh@10 -- # set +x 00:13:38.438 ************************************ 00:13:38.439 START TEST event_reactor 00:13:38.439 ************************************ 00:13:38.439 11:01:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:38.439 [2024-04-18 11:01:06.849327] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:38.439 [2024-04-18 11:01:06.849420] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74631 ] 00:13:38.439 [2024-04-18 11:01:06.988677] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.695 [2024-04-18 11:01:07.087598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.665 test_start 00:13:39.665 oneshot 00:13:39.665 tick 100 00:13:39.665 tick 100 00:13:39.665 tick 250 00:13:39.665 tick 100 00:13:39.665 tick 100 00:13:39.665 tick 250 00:13:39.665 tick 100 00:13:39.665 tick 500 00:13:39.665 tick 100 00:13:39.665 tick 100 00:13:39.665 tick 250 00:13:39.665 tick 100 00:13:39.665 tick 100 00:13:39.665 test_end 00:13:39.665 00:13:39.665 real 0m1.332s 00:13:39.665 user 0m1.167s 00:13:39.665 sys 0m0.058s 00:13:39.665 11:01:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.665 11:01:08 -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 ************************************ 00:13:39.665 END TEST event_reactor 00:13:39.665 ************************************ 00:13:39.665 11:01:08 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:39.665 11:01:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:39.665 11:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.665 11:01:08 -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 ************************************ 00:13:39.665 START TEST event_reactor_perf 00:13:39.665 ************************************ 00:13:39.665 11:01:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:39.665 [2024-04-18 11:01:08.292988] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:39.665 [2024-04-18 11:01:08.293354] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74670 ] 00:13:39.923 [2024-04-18 11:01:08.434929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.923 [2024-04-18 11:01:08.533256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.295 test_start 00:13:41.295 test_end 00:13:41.295 Performance: 369373 events per second 00:13:41.295 ************************************ 00:13:41.295 END TEST event_reactor_perf 00:13:41.295 ************************************ 00:13:41.295 00:13:41.295 real 0m1.332s 00:13:41.295 user 0m1.162s 00:13:41.295 sys 0m0.063s 00:13:41.295 11:01:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.295 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:13:41.295 11:01:09 -- event/event.sh@49 -- # uname -s 00:13:41.295 11:01:09 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:13:41.295 11:01:09 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:41.295 11:01:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.295 11:01:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.295 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:13:41.295 ************************************ 00:13:41.295 START TEST event_scheduler 00:13:41.295 ************************************ 00:13:41.295 11:01:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:41.295 * Looking for test storage... 00:13:41.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:13:41.295 11:01:09 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:13:41.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.295 11:01:09 -- scheduler/scheduler.sh@35 -- # scheduler_pid=74737 00:13:41.295 11:01:09 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:13:41.295 11:01:09 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:13:41.295 11:01:09 -- scheduler/scheduler.sh@37 -- # waitforlisten 74737 00:13:41.295 11:01:09 -- common/autotest_common.sh@817 -- # '[' -z 74737 ']' 00:13:41.295 11:01:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.295 11:01:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.295 11:01:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.295 11:01:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.295 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:13:41.295 [2024-04-18 11:01:09.861505] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:41.295 [2024-04-18 11:01:09.861820] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74737 ] 00:13:41.552 [2024-04-18 11:01:10.002557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.552 [2024-04-18 11:01:10.106080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.552 [2024-04-18 11:01:10.106173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.552 [2024-04-18 11:01:10.106219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.552 [2024-04-18 11:01:10.106455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.487 11:01:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:42.487 11:01:10 -- common/autotest_common.sh@850 -- # return 0 00:13:42.487 11:01:10 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:13:42.487 11:01:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.487 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.487 POWER: Env isn't set yet! 00:13:42.487 POWER: Attempting to initialise ACPI cpufreq power management... 00:13:42.487 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:42.487 POWER: Cannot set governor of lcore 0 to userspace 00:13:42.487 POWER: Attempting to initialise PSTAT power management... 00:13:42.487 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:42.487 POWER: Cannot set governor of lcore 0 to performance 00:13:42.487 POWER: Attempting to initialise AMD PSTATE power management... 00:13:42.487 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:42.487 POWER: Cannot set governor of lcore 0 to userspace 00:13:42.487 POWER: Attempting to initialise CPPC power management... 00:13:42.487 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:42.487 POWER: Cannot set governor of lcore 0 to userspace 00:13:42.487 POWER: Attempting to initialise VM power management... 00:13:42.487 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:13:42.487 POWER: Unable to set Power Management Environment for lcore 0 00:13:42.487 [2024-04-18 11:01:10.912139] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:13:42.487 [2024-04-18 11:01:10.912154] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:13:42.487 [2024-04-18 11:01:10.912163] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:13:42.487 11:01:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.487 11:01:10 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:13:42.487 11:01:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.487 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.487 [2024-04-18 11:01:11.008607] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
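The governor errors above come from the scheduler test probing ACPI cpufreq, PSTAT, AMD PSTATE, CPPC and the virtio power agent in turn; none are usable in this VM, so the dynamic scheduler initializes without a DPDK governor and the test continues. The same switch can be driven by hand with scripts/rpc.py against an app started with --wait-for-rpc; a minimal sketch, assuming the default socket path that waitforlisten polls above:

    # Sketch only: set the dynamic scheduler and finish init by hand.
    # Assumes an SPDK app is already listening on /var/tmp/spdk.sock in --wait-for-rpc mode.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # may log governor warnings in a VM
    $RPC -s /var/tmp/spdk.sock framework_start_init              # leave --wait-for-rpc state, start subsystems
    $RPC -s /var/tmp/spdk.sock framework_get_scheduler           # confirm which scheduler is active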
00:13:42.487 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.487 11:01:11 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:13:42.487 11:01:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:42.487 11:01:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:42.487 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.487 ************************************ 00:13:42.487 START TEST scheduler_create_thread 00:13:42.487 ************************************ 00:13:42.487 11:01:11 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:13:42.487 11:01:11 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:13:42.487 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.487 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.487 2 00:13:42.487 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.487 11:01:11 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:13:42.487 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.487 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.487 3 00:13:42.487 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.487 11:01:11 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:13:42.487 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.487 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.487 4 00:13:42.487 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.487 11:01:11 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:13:42.487 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.487 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.746 5 00:13:42.746 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:13:42.746 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.746 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.746 6 00:13:42.746 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:13:42.746 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.746 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.746 7 00:13:42.746 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:13:42.746 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.746 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.746 8 00:13:42.746 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:13:42.746 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.746 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.746 9 00:13:42.746 
11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:13:42.746 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.746 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.746 10 00:13:42.746 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:13:42.746 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.746 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:42.746 11:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:13:42.746 11:01:11 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:13:42.746 11:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.746 11:01:11 -- common/autotest_common.sh@10 -- # set +x 00:13:43.679 11:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.679 11:01:12 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:13:43.679 11:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.679 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:13:45.051 11:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.051 11:01:13 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:13:45.051 11:01:13 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:13:45.051 11:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.051 11:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:45.983 ************************************ 00:13:45.983 END TEST scheduler_create_thread 00:13:45.983 ************************************ 00:13:45.983 11:01:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.983 00:13:45.983 real 0m3.374s 00:13:45.983 user 0m0.015s 00:13:45.983 sys 0m0.007s 00:13:45.983 11:01:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.983 11:01:14 -- common/autotest_common.sh@10 -- # set +x 00:13:45.983 11:01:14 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:45.983 11:01:14 -- scheduler/scheduler.sh@46 -- # killprocess 74737 00:13:45.983 11:01:14 -- common/autotest_common.sh@936 -- # '[' -z 74737 ']' 00:13:45.983 11:01:14 -- common/autotest_common.sh@940 -- # kill -0 74737 00:13:45.983 11:01:14 -- common/autotest_common.sh@941 -- # uname 00:13:45.983 11:01:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.983 11:01:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74737 00:13:45.983 killing process with pid 74737 00:13:45.983 11:01:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:45.983 11:01:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:45.983 11:01:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74737' 00:13:45.983 11:01:14 -- common/autotest_common.sh@955 -- # kill 74737 00:13:45.983 11:01:14 -- common/autotest_common.sh@960 -- # wait 74737 00:13:46.242 [2024-04-18 11:01:14.837481] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
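The teardown above is the harness's killprocess pattern: confirm the pid is still alive with kill -0, inspect the process name (reactor_2 here) so a sudo wrapper is never signalled directly, then kill and wait for the process to exit so nothing is left holding the RPC socket before the next test starts. A stand-alone sketch of that flow, with the pid supplied by the caller (74737 is the scheduler app from this run):

    # Sketch only: mirrors the kill -0 / ps / kill / wait sequence traced above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                      # no pid given
        kill -0 "$pid" 2>/dev/null || return 0         # process already exited
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0, reactor_2
            [ "$name" = sudo ] && return 1             # real harness signals sudo's child instead; skipped here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it (the app was launched from this shell)
    }
    killprocess 74737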
00:13:46.513 ************************************ 00:13:46.513 END TEST event_scheduler 00:13:46.513 ************************************ 00:13:46.513 00:13:46.513 real 0m5.375s 00:13:46.513 user 0m11.218s 00:13:46.513 sys 0m0.436s 00:13:46.513 11:01:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.513 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:13:46.513 11:01:15 -- event/event.sh@51 -- # modprobe -n nbd 00:13:46.513 11:01:15 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:13:46.513 11:01:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:46.514 11:01:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.514 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:13:46.773 ************************************ 00:13:46.773 START TEST app_repeat 00:13:46.773 ************************************ 00:13:46.773 11:01:15 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:13:46.773 11:01:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:46.773 11:01:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:46.773 11:01:15 -- event/event.sh@13 -- # local nbd_list 00:13:46.773 11:01:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:46.773 11:01:15 -- event/event.sh@14 -- # local bdev_list 00:13:46.773 11:01:15 -- event/event.sh@15 -- # local repeat_times=4 00:13:46.773 11:01:15 -- event/event.sh@17 -- # modprobe nbd 00:13:46.773 Process app_repeat pid: 74870 00:13:46.773 spdk_app_start Round 0 00:13:46.773 11:01:15 -- event/event.sh@19 -- # repeat_pid=74870 00:13:46.773 11:01:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:13:46.773 11:01:15 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:13:46.773 11:01:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74870' 00:13:46.773 11:01:15 -- event/event.sh@23 -- # for i in {0..2} 00:13:46.773 11:01:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:13:46.773 11:01:15 -- event/event.sh@25 -- # waitforlisten 74870 /var/tmp/spdk-nbd.sock 00:13:46.773 11:01:15 -- common/autotest_common.sh@817 -- # '[' -z 74870 ']' 00:13:46.773 11:01:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:46.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:46.773 11:01:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:46.773 11:01:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:46.773 11:01:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:46.773 11:01:15 -- common/autotest_common.sh@10 -- # set +x 00:13:46.773 [2024-04-18 11:01:15.246286] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:46.773 [2024-04-18 11:01:15.246599] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74870 ] 00:13:46.773 [2024-04-18 11:01:15.388213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.032 [2024-04-18 11:01:15.487433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.032 [2024-04-18 11:01:15.487453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.014 11:01:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:48.014 11:01:16 -- common/autotest_common.sh@850 -- # return 0 00:13:48.014 11:01:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:48.014 Malloc0 00:13:48.014 11:01:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:48.274 Malloc1 00:13:48.274 11:01:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@12 -- # local i 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.274 11:01:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:48.532 /dev/nbd0 00:13:48.532 11:01:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.532 11:01:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.532 11:01:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:48.532 11:01:17 -- common/autotest_common.sh@855 -- # local i 00:13:48.532 11:01:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:48.532 11:01:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:48.532 11:01:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:48.532 11:01:17 -- common/autotest_common.sh@859 -- # break 00:13:48.532 11:01:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:48.532 11:01:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:48.532 11:01:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:48.532 1+0 records in 00:13:48.532 1+0 records out 00:13:48.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296052 s, 13.8 MB/s 00:13:48.532 11:01:17 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:48.532 11:01:17 -- common/autotest_common.sh@872 -- # size=4096 00:13:48.532 11:01:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:48.532 11:01:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:48.532 11:01:17 -- common/autotest_common.sh@875 -- # return 0 00:13:48.532 11:01:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.532 11:01:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.532 11:01:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:49.096 /dev/nbd1 00:13:49.096 11:01:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:49.096 11:01:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:49.096 11:01:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:49.096 11:01:17 -- common/autotest_common.sh@855 -- # local i 00:13:49.096 11:01:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:49.096 11:01:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:49.096 11:01:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:49.096 11:01:17 -- common/autotest_common.sh@859 -- # break 00:13:49.096 11:01:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:49.096 11:01:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:49.096 11:01:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:49.096 1+0 records in 00:13:49.096 1+0 records out 00:13:49.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324745 s, 12.6 MB/s 00:13:49.096 11:01:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:49.096 11:01:17 -- common/autotest_common.sh@872 -- # size=4096 00:13:49.096 11:01:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:49.096 11:01:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:49.096 11:01:17 -- common/autotest_common.sh@875 -- # return 0 00:13:49.097 11:01:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.097 11:01:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:49.097 11:01:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:49.097 11:01:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:49.097 11:01:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:49.355 { 00:13:49.355 "bdev_name": "Malloc0", 00:13:49.355 "nbd_device": "/dev/nbd0" 00:13:49.355 }, 00:13:49.355 { 00:13:49.355 "bdev_name": "Malloc1", 00:13:49.355 "nbd_device": "/dev/nbd1" 00:13:49.355 } 00:13:49.355 ]' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:49.355 { 00:13:49.355 "bdev_name": "Malloc0", 00:13:49.355 "nbd_device": "/dev/nbd0" 00:13:49.355 }, 00:13:49.355 { 00:13:49.355 "bdev_name": "Malloc1", 00:13:49.355 "nbd_device": "/dev/nbd1" 00:13:49.355 } 00:13:49.355 ]' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:49.355 /dev/nbd1' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:49.355 /dev/nbd1' 00:13:49.355 11:01:17 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@65 -- # count=2 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@95 -- # count=2 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:49.355 256+0 records in 00:13:49.355 256+0 records out 00:13:49.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767303 s, 137 MB/s 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:49.355 256+0 records in 00:13:49.355 256+0 records out 00:13:49.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246971 s, 42.5 MB/s 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:49.355 256+0 records in 00:13:49.355 256+0 records out 00:13:49.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306547 s, 34.2 MB/s 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@51 -- # local i 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.355 11:01:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@41 -- # break 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.614 11:01:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:49.873 11:01:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:49.873 11:01:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:49.873 11:01:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:49.873 11:01:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.873 11:01:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.873 11:01:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.132 11:01:18 -- bdev/nbd_common.sh@41 -- # break 00:13:50.132 11:01:18 -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.132 11:01:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:50.132 11:01:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:50.132 11:01:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@65 -- # true 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@65 -- # count=0 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@104 -- # count=0 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:50.391 11:01:18 -- bdev/nbd_common.sh@109 -- # return 0 00:13:50.391 11:01:18 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:50.649 11:01:19 -- event/event.sh@35 -- # sleep 3 00:13:50.908 [2024-04-18 11:01:19.314054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.908 [2024-04-18 11:01:19.394768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.908 [2024-04-18 11:01:19.394774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.908 [2024-04-18 11:01:19.449868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:50.908 [2024-04-18 11:01:19.449931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
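Each app_repeat round above runs the same data path: waitfornbd polls /proc/partitions and issues one direct 4 KiB read to confirm the exported device answers, then nbd_dd_data_verify writes 1 MiB of random data through each /dev/nbdX with oflag=direct and compares it back against the source file. Condensed into a stand-alone sketch, with paths and device names taken from the log and assumed to already be connected to the malloc bdevs:

    # Sketch only: write random data through exported nbd devices and verify it.
    TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$TMP" bs=4096 count=256                 # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        grep -q -w "$(basename "$dev")" /proc/partitions           # device must be visible to the kernel
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct      # push it through the nbd export
        cmp -b -n 1M "$TMP" "$dev"                                 # read back and compare byte-for-byte
    done
    rm -f "$TMP"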
00:13:54.222 11:01:22 -- event/event.sh@23 -- # for i in {0..2} 00:13:54.222 spdk_app_start Round 1 00:13:54.222 11:01:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:13:54.222 11:01:22 -- event/event.sh@25 -- # waitforlisten 74870 /var/tmp/spdk-nbd.sock 00:13:54.222 11:01:22 -- common/autotest_common.sh@817 -- # '[' -z 74870 ']' 00:13:54.223 11:01:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:54.223 11:01:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:54.223 11:01:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:54.223 11:01:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.223 11:01:22 -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 11:01:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:54.223 11:01:22 -- common/autotest_common.sh@850 -- # return 0 00:13:54.223 11:01:22 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:54.223 Malloc0 00:13:54.223 11:01:22 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:54.482 Malloc1 00:13:54.482 11:01:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@12 -- # local i 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:54.482 11:01:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:54.740 /dev/nbd0 00:13:54.740 11:01:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:54.740 11:01:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:54.740 11:01:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:54.740 11:01:23 -- common/autotest_common.sh@855 -- # local i 00:13:54.740 11:01:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:54.740 11:01:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:54.740 11:01:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:54.740 11:01:23 -- common/autotest_common.sh@859 -- # break 00:13:54.740 11:01:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:54.740 11:01:23 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:13:54.740 11:01:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:54.740 1+0 records in 00:13:54.740 1+0 records out 00:13:54.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241037 s, 17.0 MB/s 00:13:54.740 11:01:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:54.740 11:01:23 -- common/autotest_common.sh@872 -- # size=4096 00:13:54.740 11:01:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:54.740 11:01:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:54.740 11:01:23 -- common/autotest_common.sh@875 -- # return 0 00:13:54.740 11:01:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.740 11:01:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:54.740 11:01:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:54.999 /dev/nbd1 00:13:54.999 11:01:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:54.999 11:01:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:54.999 11:01:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:54.999 11:01:23 -- common/autotest_common.sh@855 -- # local i 00:13:54.999 11:01:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:54.999 11:01:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:54.999 11:01:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:54.999 11:01:23 -- common/autotest_common.sh@859 -- # break 00:13:54.999 11:01:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:54.999 11:01:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:54.999 11:01:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:54.999 1+0 records in 00:13:54.999 1+0 records out 00:13:54.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342617 s, 12.0 MB/s 00:13:54.999 11:01:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:54.999 11:01:23 -- common/autotest_common.sh@872 -- # size=4096 00:13:54.999 11:01:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:54.999 11:01:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:54.999 11:01:23 -- common/autotest_common.sh@875 -- # return 0 00:13:54.999 11:01:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.999 11:01:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:54.999 11:01:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:54.999 11:01:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:54.999 11:01:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:55.257 11:01:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:55.257 { 00:13:55.257 "bdev_name": "Malloc0", 00:13:55.257 "nbd_device": "/dev/nbd0" 00:13:55.257 }, 00:13:55.257 { 00:13:55.257 "bdev_name": "Malloc1", 00:13:55.257 "nbd_device": "/dev/nbd1" 00:13:55.257 } 00:13:55.257 ]' 00:13:55.257 11:01:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:55.257 { 00:13:55.257 "bdev_name": "Malloc0", 00:13:55.257 "nbd_device": "/dev/nbd0" 00:13:55.257 }, 00:13:55.257 { 00:13:55.257 "bdev_name": "Malloc1", 00:13:55.257 "nbd_device": "/dev/nbd1" 00:13:55.257 } 
00:13:55.257 ]' 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:55.258 /dev/nbd1' 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:55.258 /dev/nbd1' 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@65 -- # count=2 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@95 -- # count=2 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:55.258 256+0 records in 00:13:55.258 256+0 records out 00:13:55.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773376 s, 136 MB/s 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:55.258 11:01:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:55.517 256+0 records in 00:13:55.517 256+0 records out 00:13:55.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255701 s, 41.0 MB/s 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:55.517 256+0 records in 00:13:55.517 256+0 records out 00:13:55.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270577 s, 38.8 MB/s 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:13:55.517 11:01:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@51 -- # local i 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.517 11:01:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@41 -- # break 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.776 11:01:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@41 -- # break 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.035 11:01:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@65 -- # true 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@65 -- # count=0 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@104 -- # count=0 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:56.294 11:01:24 -- bdev/nbd_common.sh@109 -- # return 0 00:13:56.294 11:01:24 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:56.865 11:01:25 -- event/event.sh@35 -- # sleep 3 00:13:56.865 [2024-04-18 11:01:25.402110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.865 [2024-04-18 11:01:25.485904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.865 [2024-04-18 11:01:25.485913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.124 [2024-04-18 11:01:25.541174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:13:57.124 [2024-04-18 11:01:25.541239] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:59.654 11:01:28 -- event/event.sh@23 -- # for i in {0..2} 00:13:59.654 spdk_app_start Round 2 00:13:59.654 11:01:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:13:59.654 11:01:28 -- event/event.sh@25 -- # waitforlisten 74870 /var/tmp/spdk-nbd.sock 00:13:59.654 11:01:28 -- common/autotest_common.sh@817 -- # '[' -z 74870 ']' 00:13:59.654 11:01:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:59.654 11:01:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:59.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:59.654 11:01:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:59.654 11:01:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:59.654 11:01:28 -- common/autotest_common.sh@10 -- # set +x 00:13:59.913 11:01:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:59.913 11:01:28 -- common/autotest_common.sh@850 -- # return 0 00:13:59.913 11:01:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:00.171 Malloc0 00:14:00.171 11:01:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:00.430 Malloc1 00:14:00.430 11:01:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@12 -- # local i 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:00.430 11:01:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:00.689 /dev/nbd0 00:14:00.689 11:01:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.689 11:01:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.689 11:01:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:14:00.689 11:01:29 -- common/autotest_common.sh@855 -- # local i 00:14:00.689 11:01:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:00.689 11:01:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:00.689 11:01:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:14:00.689 11:01:29 -- common/autotest_common.sh@859 
-- # break 00:14:00.689 11:01:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:00.689 11:01:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:00.689 11:01:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:00.689 1+0 records in 00:14:00.689 1+0 records out 00:14:00.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238791 s, 17.2 MB/s 00:14:00.689 11:01:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:00.689 11:01:29 -- common/autotest_common.sh@872 -- # size=4096 00:14:00.689 11:01:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:00.689 11:01:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:00.689 11:01:29 -- common/autotest_common.sh@875 -- # return 0 00:14:00.689 11:01:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.689 11:01:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:00.689 11:01:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:00.948 /dev/nbd1 00:14:00.948 11:01:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:00.948 11:01:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:00.948 11:01:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:14:00.948 11:01:29 -- common/autotest_common.sh@855 -- # local i 00:14:00.948 11:01:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:00.948 11:01:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:00.948 11:01:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:14:00.948 11:01:29 -- common/autotest_common.sh@859 -- # break 00:14:00.948 11:01:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:00.948 11:01:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:00.948 11:01:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:00.948 1+0 records in 00:14:00.948 1+0 records out 00:14:00.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247516 s, 16.5 MB/s 00:14:00.948 11:01:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:00.948 11:01:29 -- common/autotest_common.sh@872 -- # size=4096 00:14:00.948 11:01:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:00.948 11:01:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:00.948 11:01:29 -- common/autotest_common.sh@875 -- # return 0 00:14:00.948 11:01:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.948 11:01:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:00.948 11:01:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:00.948 11:01:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.948 11:01:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:01.206 11:01:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:01.206 { 00:14:01.206 "bdev_name": "Malloc0", 00:14:01.206 "nbd_device": "/dev/nbd0" 00:14:01.206 }, 00:14:01.206 { 00:14:01.206 "bdev_name": "Malloc1", 00:14:01.206 "nbd_device": "/dev/nbd1" 00:14:01.206 } 00:14:01.206 ]' 00:14:01.206 11:01:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:01.206 { 00:14:01.206 "bdev_name": "Malloc0", 00:14:01.206 
"nbd_device": "/dev/nbd0" 00:14:01.206 }, 00:14:01.206 { 00:14:01.206 "bdev_name": "Malloc1", 00:14:01.206 "nbd_device": "/dev/nbd1" 00:14:01.206 } 00:14:01.206 ]' 00:14:01.206 11:01:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:01.464 /dev/nbd1' 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:01.464 /dev/nbd1' 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@65 -- # count=2 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@95 -- # count=2 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:01.464 256+0 records in 00:14:01.464 256+0 records out 00:14:01.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100328 s, 105 MB/s 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:01.464 256+0 records in 00:14:01.464 256+0 records out 00:14:01.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243378 s, 43.1 MB/s 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:01.464 256+0 records in 00:14:01.464 256+0 records out 00:14:01.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261435 s, 40.1 MB/s 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:01.464 11:01:29 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@51 -- # local i 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.464 11:01:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:01.722 11:01:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.722 11:01:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.722 11:01:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.723 11:01:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.723 11:01:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.723 11:01:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.723 11:01:30 -- bdev/nbd_common.sh@41 -- # break 00:14:01.723 11:01:30 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.723 11:01:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.723 11:01:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@41 -- # break 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.981 11:01:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@65 -- # true 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@65 -- # count=0 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@104 -- # count=0 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:02.240 11:01:30 -- bdev/nbd_common.sh@109 -- # return 0 00:14:02.240 11:01:30 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:02.810 11:01:31 -- event/event.sh@35 -- # sleep 3 00:14:02.810 [2024-04-18 11:01:31.355504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:02.810 [2024-04-18 11:01:31.441865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.810 [2024-04-18 11:01:31.441878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.075 [2024-04-18 11:01:31.496666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:14:03.075 [2024-04-18 11:01:31.496729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:05.607 11:01:34 -- event/event.sh@38 -- # waitforlisten 74870 /var/tmp/spdk-nbd.sock 00:14:05.607 11:01:34 -- common/autotest_common.sh@817 -- # '[' -z 74870 ']' 00:14:05.607 11:01:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:05.607 11:01:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:05.607 11:01:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:05.607 11:01:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.607 11:01:34 -- common/autotest_common.sh@10 -- # set +x 00:14:05.865 11:01:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.865 11:01:34 -- common/autotest_common.sh@850 -- # return 0 00:14:05.865 11:01:34 -- event/event.sh@39 -- # killprocess 74870 00:14:05.865 11:01:34 -- common/autotest_common.sh@936 -- # '[' -z 74870 ']' 00:14:05.865 11:01:34 -- common/autotest_common.sh@940 -- # kill -0 74870 00:14:05.865 11:01:34 -- common/autotest_common.sh@941 -- # uname 00:14:05.865 11:01:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.865 11:01:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74870 00:14:05.865 11:01:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.865 11:01:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.865 killing process with pid 74870 00:14:05.865 11:01:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74870' 00:14:05.865 11:01:34 -- common/autotest_common.sh@955 -- # kill 74870 00:14:05.865 11:01:34 -- common/autotest_common.sh@960 -- # wait 74870 00:14:06.133 spdk_app_start is called in Round 0. 00:14:06.133 Shutdown signal received, stop current app iteration 00:14:06.133 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:14:06.133 spdk_app_start is called in Round 1. 00:14:06.133 Shutdown signal received, stop current app iteration 00:14:06.133 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:14:06.133 spdk_app_start is called in Round 2. 00:14:06.133 Shutdown signal received, stop current app iteration 00:14:06.133 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:14:06.133 spdk_app_start is called in Round 3. 
00:14:06.133 Shutdown signal received, stop current app iteration 00:14:06.133 11:01:34 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:14:06.133 11:01:34 -- event/event.sh@42 -- # return 0 00:14:06.133 00:14:06.133 real 0m19.423s 00:14:06.133 user 0m43.637s 00:14:06.133 sys 0m3.175s 00:14:06.133 11:01:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.133 ************************************ 00:14:06.133 END TEST app_repeat 00:14:06.133 11:01:34 -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 ************************************ 00:14:06.133 11:01:34 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:14:06.133 11:01:34 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:06.133 11:01:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:06.133 11:01:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.133 11:01:34 -- common/autotest_common.sh@10 -- # set +x 00:14:06.133 ************************************ 00:14:06.133 START TEST cpu_locks 00:14:06.133 ************************************ 00:14:06.133 11:01:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:06.392 * Looking for test storage... 00:14:06.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:06.392 11:01:34 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:14:06.392 11:01:34 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:14:06.392 11:01:34 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:14:06.392 11:01:34 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:14:06.392 11:01:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:06.392 11:01:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.392 11:01:34 -- common/autotest_common.sh@10 -- # set +x 00:14:06.392 ************************************ 00:14:06.392 START TEST default_locks 00:14:06.392 ************************************ 00:14:06.392 11:01:34 -- common/autotest_common.sh@1111 -- # default_locks 00:14:06.392 11:01:34 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75511 00:14:06.392 11:01:34 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:06.392 11:01:34 -- event/cpu_locks.sh@47 -- # waitforlisten 75511 00:14:06.392 11:01:34 -- common/autotest_common.sh@817 -- # '[' -z 75511 ']' 00:14:06.392 11:01:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.392 11:01:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:06.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.392 11:01:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.392 11:01:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:06.392 11:01:34 -- common/autotest_common.sh@10 -- # set +x 00:14:06.392 [2024-04-18 11:01:34.999867] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:06.392 [2024-04-18 11:01:34.999980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75511 ] 00:14:06.651 [2024-04-18 11:01:35.137616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.651 [2024-04-18 11:01:35.240514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.588 11:01:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:07.588 11:01:35 -- common/autotest_common.sh@850 -- # return 0 00:14:07.588 11:01:35 -- event/cpu_locks.sh@49 -- # locks_exist 75511 00:14:07.588 11:01:35 -- event/cpu_locks.sh@22 -- # lslocks -p 75511 00:14:07.588 11:01:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:07.845 11:01:36 -- event/cpu_locks.sh@50 -- # killprocess 75511 00:14:07.845 11:01:36 -- common/autotest_common.sh@936 -- # '[' -z 75511 ']' 00:14:07.845 11:01:36 -- common/autotest_common.sh@940 -- # kill -0 75511 00:14:07.845 11:01:36 -- common/autotest_common.sh@941 -- # uname 00:14:07.846 11:01:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:07.846 11:01:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75511 00:14:07.846 11:01:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:07.846 11:01:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:07.846 killing process with pid 75511 00:14:07.846 11:01:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75511' 00:14:07.846 11:01:36 -- common/autotest_common.sh@955 -- # kill 75511 00:14:07.846 11:01:36 -- common/autotest_common.sh@960 -- # wait 75511 00:14:08.104 11:01:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75511 00:14:08.104 11:01:36 -- common/autotest_common.sh@638 -- # local es=0 00:14:08.363 11:01:36 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 75511 00:14:08.363 11:01:36 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:14:08.363 11:01:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:08.363 11:01:36 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:14:08.363 11:01:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:08.363 11:01:36 -- common/autotest_common.sh@641 -- # waitforlisten 75511 00:14:08.363 11:01:36 -- common/autotest_common.sh@817 -- # '[' -z 75511 ']' 00:14:08.363 11:01:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.363 11:01:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:08.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.363 11:01:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
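The default_locks check traced above boils down to two commands: start a single-core target and confirm it holds a per-core lock file. A minimal standalone sketch using the same paths as the trace (the PID variable is a placeholder for whatever the target/waitforlisten reports, not part of the test script):

# Start an SPDK target pinned to core 0; on startup it takes a per-core file lock
# (named like /var/tmp/spdk_cpu_lock_000, per the lock list later in this log).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!   # placeholder; the test uses the pid echoed by the target
# While the target runs, lslocks on that PID should list the spdk_cpu_lock entry.
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo 'core lock is held'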
00:14:08.363 11:01:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:08.363 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:14:08.363 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (75511) - No such process 00:14:08.363 ERROR: process (pid: 75511) is no longer running 00:14:08.363 11:01:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:08.363 11:01:36 -- common/autotest_common.sh@850 -- # return 1 00:14:08.363 11:01:36 -- common/autotest_common.sh@641 -- # es=1 00:14:08.363 11:01:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:08.363 11:01:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:08.364 11:01:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:08.364 11:01:36 -- event/cpu_locks.sh@54 -- # no_locks 00:14:08.364 11:01:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:08.364 11:01:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:14:08.364 11:01:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:08.364 00:14:08.364 real 0m1.823s 00:14:08.364 user 0m1.924s 00:14:08.364 sys 0m0.556s 00:14:08.364 11:01:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:08.364 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:14:08.364 ************************************ 00:14:08.364 END TEST default_locks 00:14:08.364 ************************************ 00:14:08.364 11:01:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:14:08.364 11:01:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:08.364 11:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:08.364 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:14:08.364 ************************************ 00:14:08.364 START TEST default_locks_via_rpc 00:14:08.364 ************************************ 00:14:08.364 11:01:36 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:14:08.364 11:01:36 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:08.364 11:01:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75579 00:14:08.364 11:01:36 -- event/cpu_locks.sh@63 -- # waitforlisten 75579 00:14:08.364 11:01:36 -- common/autotest_common.sh@817 -- # '[' -z 75579 ']' 00:14:08.364 11:01:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.364 11:01:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:08.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.364 11:01:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.364 11:01:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:08.364 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:14:08.364 [2024-04-18 11:01:36.935005] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:08.364 [2024-04-18 11:01:36.935128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75579 ] 00:14:08.622 [2024-04-18 11:01:37.073484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.622 [2024-04-18 11:01:37.179171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.559 11:01:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.559 11:01:37 -- common/autotest_common.sh@850 -- # return 0 00:14:09.559 11:01:37 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:14:09.559 11:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.559 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:14:09.559 11:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.559 11:01:37 -- event/cpu_locks.sh@67 -- # no_locks 00:14:09.559 11:01:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:09.559 11:01:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:14:09.559 11:01:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:09.559 11:01:37 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:14:09.559 11:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.559 11:01:37 -- common/autotest_common.sh@10 -- # set +x 00:14:09.559 11:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.559 11:01:37 -- event/cpu_locks.sh@71 -- # locks_exist 75579 00:14:09.559 11:01:37 -- event/cpu_locks.sh@22 -- # lslocks -p 75579 00:14:09.559 11:01:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:09.817 11:01:38 -- event/cpu_locks.sh@73 -- # killprocess 75579 00:14:09.817 11:01:38 -- common/autotest_common.sh@936 -- # '[' -z 75579 ']' 00:14:09.817 11:01:38 -- common/autotest_common.sh@940 -- # kill -0 75579 00:14:09.817 11:01:38 -- common/autotest_common.sh@941 -- # uname 00:14:09.817 11:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.817 11:01:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75579 00:14:09.817 11:01:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:09.817 killing process with pid 75579 00:14:09.817 11:01:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:09.817 11:01:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75579' 00:14:09.817 11:01:38 -- common/autotest_common.sh@955 -- # kill 75579 00:14:09.817 11:01:38 -- common/autotest_common.sh@960 -- # wait 75579 00:14:10.384 00:14:10.384 real 0m1.839s 00:14:10.384 user 0m1.922s 00:14:10.384 sys 0m0.557s 00:14:10.384 11:01:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:10.384 11:01:38 -- common/autotest_common.sh@10 -- # set +x 00:14:10.384 ************************************ 00:14:10.384 END TEST default_locks_via_rpc 00:14:10.384 ************************************ 00:14:10.384 11:01:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:14:10.384 11:01:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:10.384 11:01:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:10.384 11:01:38 -- common/autotest_common.sh@10 -- # set +x 00:14:10.384 ************************************ 00:14:10.384 START TEST non_locking_app_on_locked_coremask 00:14:10.384 ************************************ 00:14:10.384 11:01:38 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:14:10.384 11:01:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75653 00:14:10.384 11:01:38 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:10.384 11:01:38 -- event/cpu_locks.sh@81 -- # waitforlisten 75653 /var/tmp/spdk.sock 00:14:10.384 11:01:38 -- common/autotest_common.sh@817 -- # '[' -z 75653 ']' 00:14:10.384 11:01:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.384 11:01:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:10.384 11:01:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.384 11:01:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:10.384 11:01:38 -- common/autotest_common.sh@10 -- # set +x 00:14:10.384 [2024-04-18 11:01:38.905221] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:10.384 [2024-04-18 11:01:38.905355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75653 ] 00:14:10.642 [2024-04-18 11:01:39.046344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.642 [2024-04-18 11:01:39.147072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.577 11:01:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.577 11:01:39 -- common/autotest_common.sh@850 -- # return 0 00:14:11.577 11:01:39 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75681 00:14:11.577 11:01:39 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:14:11.577 11:01:39 -- event/cpu_locks.sh@85 -- # waitforlisten 75681 /var/tmp/spdk2.sock 00:14:11.577 11:01:39 -- common/autotest_common.sh@817 -- # '[' -z 75681 ']' 00:14:11.577 11:01:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:11.577 11:01:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:11.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:11.577 11:01:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:11.577 11:01:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:11.577 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:14:11.577 [2024-04-18 11:01:39.938210] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:11.577 [2024-04-18 11:01:39.938308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75681 ] 00:14:11.577 [2024-04-18 11:01:40.078364] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:11.577 [2024-04-18 11:01:40.078426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.835 [2024-04-18 11:01:40.270168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.439 11:01:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:12.439 11:01:40 -- common/autotest_common.sh@850 -- # return 0 00:14:12.439 11:01:40 -- event/cpu_locks.sh@87 -- # locks_exist 75653 00:14:12.439 11:01:40 -- event/cpu_locks.sh@22 -- # lslocks -p 75653 00:14:12.439 11:01:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:13.372 11:01:41 -- event/cpu_locks.sh@89 -- # killprocess 75653 00:14:13.372 11:01:41 -- common/autotest_common.sh@936 -- # '[' -z 75653 ']' 00:14:13.372 11:01:41 -- common/autotest_common.sh@940 -- # kill -0 75653 00:14:13.372 11:01:41 -- common/autotest_common.sh@941 -- # uname 00:14:13.372 11:01:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.372 11:01:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75653 00:14:13.372 11:01:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:13.372 11:01:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:13.372 killing process with pid 75653 00:14:13.372 11:01:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75653' 00:14:13.372 11:01:41 -- common/autotest_common.sh@955 -- # kill 75653 00:14:13.372 11:01:41 -- common/autotest_common.sh@960 -- # wait 75653 00:14:13.936 11:01:42 -- event/cpu_locks.sh@90 -- # killprocess 75681 00:14:13.936 11:01:42 -- common/autotest_common.sh@936 -- # '[' -z 75681 ']' 00:14:13.936 11:01:42 -- common/autotest_common.sh@940 -- # kill -0 75681 00:14:13.936 11:01:42 -- common/autotest_common.sh@941 -- # uname 00:14:13.936 11:01:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.936 11:01:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75681 00:14:13.936 11:01:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:13.936 killing process with pid 75681 00:14:13.936 11:01:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:13.936 11:01:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75681' 00:14:13.936 11:01:42 -- common/autotest_common.sh@955 -- # kill 75681 00:14:13.936 11:01:42 -- common/autotest_common.sh@960 -- # wait 75681 00:14:14.194 00:14:14.194 real 0m3.985s 00:14:14.194 user 0m4.360s 00:14:14.194 sys 0m1.123s 00:14:14.194 11:01:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.194 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:14:14.194 ************************************ 00:14:14.194 END TEST non_locking_app_on_locked_coremask 00:14:14.194 ************************************ 00:14:14.453 11:01:42 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:14:14.453 11:01:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:14.453 11:01:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.453 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:14:14.453 ************************************ 00:14:14.453 START TEST locking_app_on_unlocked_coremask 00:14:14.453 ************************************ 00:14:14.453 11:01:42 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:14:14.453 11:01:42 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75764 00:14:14.453 11:01:42 -- event/cpu_locks.sh@99 -- # waitforlisten 75764 /var/tmp/spdk.sock 00:14:14.453 
11:01:42 -- common/autotest_common.sh@817 -- # '[' -z 75764 ']' 00:14:14.453 11:01:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.453 11:01:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:14.453 11:01:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.453 11:01:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:14.453 11:01:42 -- common/autotest_common.sh@10 -- # set +x 00:14:14.453 11:01:42 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:14:14.453 [2024-04-18 11:01:43.016527] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:14.453 [2024-04-18 11:01:43.016635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75764 ] 00:14:14.711 [2024-04-18 11:01:43.158085] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:14:14.711 [2024-04-18 11:01:43.158136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.711 [2024-04-18 11:01:43.252641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.646 11:01:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:15.646 11:01:43 -- common/autotest_common.sh@850 -- # return 0 00:14:15.646 11:01:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75792 00:14:15.646 11:01:43 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:15.646 11:01:43 -- event/cpu_locks.sh@103 -- # waitforlisten 75792 /var/tmp/spdk2.sock 00:14:15.646 11:01:43 -- common/autotest_common.sh@817 -- # '[' -z 75792 ']' 00:14:15.646 11:01:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:15.646 11:01:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:15.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:15.646 11:01:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:15.646 11:01:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:15.646 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:14:15.646 [2024-04-18 11:01:44.059826] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:15.646 [2024-04-18 11:01:44.059946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75792 ] 00:14:15.646 [2024-04-18 11:01:44.206634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.904 [2024-04-18 11:01:44.402633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.470 11:01:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:16.470 11:01:45 -- common/autotest_common.sh@850 -- # return 0 00:14:16.470 11:01:45 -- event/cpu_locks.sh@105 -- # locks_exist 75792 00:14:16.470 11:01:45 -- event/cpu_locks.sh@22 -- # lslocks -p 75792 00:14:16.470 11:01:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:17.404 11:01:45 -- event/cpu_locks.sh@107 -- # killprocess 75764 00:14:17.404 11:01:45 -- common/autotest_common.sh@936 -- # '[' -z 75764 ']' 00:14:17.404 11:01:45 -- common/autotest_common.sh@940 -- # kill -0 75764 00:14:17.404 11:01:45 -- common/autotest_common.sh@941 -- # uname 00:14:17.404 11:01:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:17.404 11:01:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75764 00:14:17.404 11:01:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:17.404 killing process with pid 75764 00:14:17.404 11:01:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:17.404 11:01:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75764' 00:14:17.404 11:01:45 -- common/autotest_common.sh@955 -- # kill 75764 00:14:17.404 11:01:45 -- common/autotest_common.sh@960 -- # wait 75764 00:14:18.338 11:01:46 -- event/cpu_locks.sh@108 -- # killprocess 75792 00:14:18.338 11:01:46 -- common/autotest_common.sh@936 -- # '[' -z 75792 ']' 00:14:18.338 11:01:46 -- common/autotest_common.sh@940 -- # kill -0 75792 00:14:18.338 11:01:46 -- common/autotest_common.sh@941 -- # uname 00:14:18.338 11:01:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.338 11:01:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75792 00:14:18.338 11:01:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:18.338 killing process with pid 75792 00:14:18.338 11:01:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:18.338 11:01:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75792' 00:14:18.338 11:01:46 -- common/autotest_common.sh@955 -- # kill 75792 00:14:18.338 11:01:46 -- common/autotest_common.sh@960 -- # wait 75792 00:14:18.601 00:14:18.601 real 0m4.074s 00:14:18.601 user 0m4.570s 00:14:18.601 sys 0m1.093s 00:14:18.601 11:01:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:18.601 11:01:47 -- common/autotest_common.sh@10 -- # set +x 00:14:18.601 ************************************ 00:14:18.601 END TEST locking_app_on_unlocked_coremask 00:14:18.601 ************************************ 00:14:18.601 11:01:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:14:18.601 11:01:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:18.601 11:01:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.601 11:01:47 -- common/autotest_common.sh@10 -- # set +x 00:14:18.601 ************************************ 00:14:18.601 START TEST locking_app_on_locked_coremask 00:14:18.601 
************************************ 00:14:18.601 11:01:47 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:14:18.601 11:01:47 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:18.601 11:01:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75876 00:14:18.601 11:01:47 -- event/cpu_locks.sh@116 -- # waitforlisten 75876 /var/tmp/spdk.sock 00:14:18.601 11:01:47 -- common/autotest_common.sh@817 -- # '[' -z 75876 ']' 00:14:18.601 11:01:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.601 11:01:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.601 11:01:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.601 11:01:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.601 11:01:47 -- common/autotest_common.sh@10 -- # set +x 00:14:18.601 [2024-04-18 11:01:47.181088] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:18.601 [2024-04-18 11:01:47.181187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75876 ] 00:14:18.878 [2024-04-18 11:01:47.320286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.878 [2024-04-18 11:01:47.423417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.814 11:01:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.814 11:01:48 -- common/autotest_common.sh@850 -- # return 0 00:14:19.814 11:01:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75904 00:14:19.814 11:01:48 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:19.814 11:01:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75904 /var/tmp/spdk2.sock 00:14:19.814 11:01:48 -- common/autotest_common.sh@638 -- # local es=0 00:14:19.814 11:01:48 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 75904 /var/tmp/spdk2.sock 00:14:19.814 11:01:48 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:14:19.814 11:01:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:19.814 11:01:48 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:14:19.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:19.814 11:01:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:19.814 11:01:48 -- common/autotest_common.sh@641 -- # waitforlisten 75904 /var/tmp/spdk2.sock 00:14:19.814 11:01:48 -- common/autotest_common.sh@817 -- # '[' -z 75904 ']' 00:14:19.814 11:01:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:19.814 11:01:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.814 11:01:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:19.814 11:01:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.814 11:01:48 -- common/autotest_common.sh@10 -- # set +x 00:14:19.814 [2024-04-18 11:01:48.235649] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:19.814 [2024-04-18 11:01:48.235741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75904 ] 00:14:19.814 [2024-04-18 11:01:48.379394] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75876 has claimed it. 00:14:19.814 [2024-04-18 11:01:48.379465] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:20.382 ERROR: process (pid: 75904) is no longer running 00:14:20.382 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (75904) - No such process 00:14:20.382 11:01:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.382 11:01:48 -- common/autotest_common.sh@850 -- # return 1 00:14:20.382 11:01:48 -- common/autotest_common.sh@641 -- # es=1 00:14:20.382 11:01:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:20.382 11:01:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:20.382 11:01:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:20.382 11:01:48 -- event/cpu_locks.sh@122 -- # locks_exist 75876 00:14:20.382 11:01:48 -- event/cpu_locks.sh@22 -- # lslocks -p 75876 00:14:20.382 11:01:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:20.949 11:01:49 -- event/cpu_locks.sh@124 -- # killprocess 75876 00:14:20.949 11:01:49 -- common/autotest_common.sh@936 -- # '[' -z 75876 ']' 00:14:20.949 11:01:49 -- common/autotest_common.sh@940 -- # kill -0 75876 00:14:20.949 11:01:49 -- common/autotest_common.sh@941 -- # uname 00:14:20.949 11:01:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.949 11:01:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75876 00:14:20.949 killing process with pid 75876 00:14:20.949 11:01:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:20.949 11:01:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:20.949 11:01:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75876' 00:14:20.949 11:01:49 -- common/autotest_common.sh@955 -- # kill 75876 00:14:20.949 11:01:49 -- common/autotest_common.sh@960 -- # wait 75876 00:14:21.516 00:14:21.516 real 0m2.723s 00:14:21.516 user 0m3.156s 00:14:21.516 sys 0m0.680s 00:14:21.516 11:01:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:21.516 ************************************ 00:14:21.516 END TEST locking_app_on_locked_coremask 00:14:21.516 ************************************ 00:14:21.516 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:14:21.516 11:01:49 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:14:21.516 11:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:21.516 11:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.516 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:14:21.516 ************************************ 00:14:21.516 START TEST locking_overlapped_coremask 00:14:21.516 ************************************ 00:14:21.516 11:01:49 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:14:21.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:21.516 11:01:49 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75965 00:14:21.516 11:01:49 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:14:21.516 11:01:49 -- event/cpu_locks.sh@133 -- # waitforlisten 75965 /var/tmp/spdk.sock 00:14:21.516 11:01:49 -- common/autotest_common.sh@817 -- # '[' -z 75965 ']' 00:14:21.516 11:01:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.516 11:01:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:21.516 11:01:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.516 11:01:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:21.516 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:14:21.516 [2024-04-18 11:01:50.030567] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:21.516 [2024-04-18 11:01:50.030866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75965 ] 00:14:21.774 [2024-04-18 11:01:50.162108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.774 [2024-04-18 11:01:50.265151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.774 [2024-04-18 11:01:50.265265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.774 [2024-04-18 11:01:50.265269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.708 11:01:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:22.708 11:01:51 -- common/autotest_common.sh@850 -- # return 0 00:14:22.708 11:01:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:14:22.708 11:01:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75995 00:14:22.708 11:01:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75995 /var/tmp/spdk2.sock 00:14:22.708 11:01:51 -- common/autotest_common.sh@638 -- # local es=0 00:14:22.708 11:01:51 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 75995 /var/tmp/spdk2.sock 00:14:22.708 11:01:51 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:14:22.708 11:01:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.708 11:01:51 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:14:22.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:22.708 11:01:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.708 11:01:51 -- common/autotest_common.sh@641 -- # waitforlisten 75995 /var/tmp/spdk2.sock 00:14:22.708 11:01:51 -- common/autotest_common.sh@817 -- # '[' -z 75995 ']' 00:14:22.708 11:01:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:22.708 11:01:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:22.708 11:01:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:22.708 11:01:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:22.708 11:01:51 -- common/autotest_common.sh@10 -- # set +x 00:14:22.708 [2024-04-18 11:01:51.119972] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:22.708 [2024-04-18 11:01:51.120084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75995 ] 00:14:22.708 [2024-04-18 11:01:51.264014] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75965 has claimed it. 00:14:22.708 [2024-04-18 11:01:51.264113] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:23.274 ERROR: process (pid: 75995) is no longer running 00:14:23.274 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (75995) - No such process 00:14:23.274 11:01:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:23.274 11:01:51 -- common/autotest_common.sh@850 -- # return 1 00:14:23.274 11:01:51 -- common/autotest_common.sh@641 -- # es=1 00:14:23.274 11:01:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:23.274 11:01:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:23.274 11:01:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:23.274 11:01:51 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:14:23.274 11:01:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:23.274 11:01:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:23.274 11:01:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:23.274 11:01:51 -- event/cpu_locks.sh@141 -- # killprocess 75965 00:14:23.274 11:01:51 -- common/autotest_common.sh@936 -- # '[' -z 75965 ']' 00:14:23.274 11:01:51 -- common/autotest_common.sh@940 -- # kill -0 75965 00:14:23.274 11:01:51 -- common/autotest_common.sh@941 -- # uname 00:14:23.274 11:01:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.274 11:01:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75965 00:14:23.274 11:01:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:23.274 11:01:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:23.274 11:01:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75965' 00:14:23.274 killing process with pid 75965 00:14:23.274 11:01:51 -- common/autotest_common.sh@955 -- # kill 75965 00:14:23.274 11:01:51 -- common/autotest_common.sh@960 -- # wait 75965 00:14:23.842 00:14:23.842 real 0m2.327s 00:14:23.842 user 0m6.585s 00:14:23.842 sys 0m0.456s 00:14:23.842 11:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.842 11:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:23.842 ************************************ 00:14:23.842 END TEST locking_overlapped_coremask 00:14:23.842 ************************************ 00:14:23.842 11:01:52 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:14:23.842 11:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:23.842 11:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.842 11:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:23.842 ************************************ 00:14:23.842 START TEST locking_overlapped_coremask_via_rpc 00:14:23.842 ************************************ 
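Both the locking_overlapped_coremask test that just finished and the RPC variant that starts here exercise the same overlap. A rough sketch of the scenario, with the core masks and socket name taken from the trace (an illustration, not the test script itself):

# First target claims cores 0-2 (mask 0x7) and creates the per-core lock files.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
# A second target asking for cores 2-4 (mask 0x1c) overlaps on core 2, so it
# logs 'Cannot create lock on core 2' and exits instead of listening.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock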
00:14:23.842 11:01:52 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:14:23.842 11:01:52 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76045 00:14:23.842 11:01:52 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:14:23.842 11:01:52 -- event/cpu_locks.sh@149 -- # waitforlisten 76045 /var/tmp/spdk.sock 00:14:23.842 11:01:52 -- common/autotest_common.sh@817 -- # '[' -z 76045 ']' 00:14:23.842 11:01:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.842 11:01:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:23.842 11:01:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.842 11:01:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:23.842 11:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:24.100 [2024-04-18 11:01:52.484752] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:24.100 [2024-04-18 11:01:52.484868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76045 ] 00:14:24.100 [2024-04-18 11:01:52.627430] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:14:24.100 [2024-04-18 11:01:52.627488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.100 [2024-04-18 11:01:52.728759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.100 [2024-04-18 11:01:52.728842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.100 [2024-04-18 11:01:52.728852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.047 11:01:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:25.047 11:01:53 -- common/autotest_common.sh@850 -- # return 0 00:14:25.047 11:01:53 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:14:25.047 11:01:53 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76075 00:14:25.047 11:01:53 -- event/cpu_locks.sh@153 -- # waitforlisten 76075 /var/tmp/spdk2.sock 00:14:25.047 11:01:53 -- common/autotest_common.sh@817 -- # '[' -z 76075 ']' 00:14:25.047 11:01:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:25.047 11:01:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:25.047 11:01:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:25.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:25.047 11:01:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:25.047 11:01:53 -- common/autotest_common.sh@10 -- # set +x 00:14:25.047 [2024-04-18 11:01:53.523164] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:25.047 [2024-04-18 11:01:53.523467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76075 ] 00:14:25.047 [2024-04-18 11:01:53.666736] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:14:25.047 [2024-04-18 11:01:53.666791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.305 [2024-04-18 11:01:53.866410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.305 [2024-04-18 11:01:53.866553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.305 [2024-04-18 11:01:53.866554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:25.871 11:01:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:25.871 11:01:54 -- common/autotest_common.sh@850 -- # return 0 00:14:25.871 11:01:54 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:14:25.871 11:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.871 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:26.129 11:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:26.129 11:01:54 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:26.129 11:01:54 -- common/autotest_common.sh@638 -- # local es=0 00:14:26.129 11:01:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:26.129 11:01:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:14:26.129 11:01:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.129 11:01:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:14:26.129 11:01:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.129 11:01:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:26.129 11:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:26.129 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:26.129 [2024-04-18 11:01:54.526167] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76045 has claimed it. 00:14:26.129 2024/04/18 11:01:54 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:14:26.129 request: 00:14:26.129 { 00:14:26.129 "method": "framework_enable_cpumask_locks", 00:14:26.129 "params": {} 00:14:26.129 } 00:14:26.129 Got JSON-RPC error response 00:14:26.129 GoRPCClient: error on JSON-RPC call 00:14:26.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
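The -32603 error above is the expected outcome of the framework_enable_cpumask_locks RPC when another instance already holds one of the requested cores; a rough reproduction with the script and sockets shown in the trace:

# Both targets were started with --disable-cpumask-locks, so no core locks exist yet.
# Enabling locks on the first instance (cores 0-2) claims them:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
# The same RPC against the second instance (cores 2-4) now fails on core 2
# with Code=-32603 'Failed to claim CPU core: 2', as logged above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks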
00:14:26.129 11:01:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:26.129 11:01:54 -- common/autotest_common.sh@641 -- # es=1 00:14:26.129 11:01:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:26.129 11:01:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:26.129 11:01:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:26.129 11:01:54 -- event/cpu_locks.sh@158 -- # waitforlisten 76045 /var/tmp/spdk.sock 00:14:26.129 11:01:54 -- common/autotest_common.sh@817 -- # '[' -z 76045 ']' 00:14:26.129 11:01:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.129 11:01:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:26.129 11:01:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.129 11:01:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:26.129 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:26.387 11:01:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:26.387 11:01:54 -- common/autotest_common.sh@850 -- # return 0 00:14:26.387 11:01:54 -- event/cpu_locks.sh@159 -- # waitforlisten 76075 /var/tmp/spdk2.sock 00:14:26.387 11:01:54 -- common/autotest_common.sh@817 -- # '[' -z 76075 ']' 00:14:26.387 11:01:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:26.387 11:01:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:26.387 11:01:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:26.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:26.387 11:01:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:26.387 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:26.645 11:01:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:26.645 11:01:55 -- common/autotest_common.sh@850 -- # return 0 00:14:26.645 11:01:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:14:26.645 11:01:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:26.645 11:01:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:26.645 11:01:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:26.645 00:14:26.645 real 0m2.669s 00:14:26.645 user 0m1.381s 00:14:26.645 sys 0m0.226s 00:14:26.645 11:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:26.645 ************************************ 00:14:26.645 END TEST locking_overlapped_coremask_via_rpc 00:14:26.645 ************************************ 00:14:26.645 11:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:26.645 11:01:55 -- event/cpu_locks.sh@174 -- # cleanup 00:14:26.645 11:01:55 -- event/cpu_locks.sh@15 -- # [[ -z 76045 ]] 00:14:26.645 11:01:55 -- event/cpu_locks.sh@15 -- # killprocess 76045 00:14:26.645 11:01:55 -- common/autotest_common.sh@936 -- # '[' -z 76045 ']' 00:14:26.645 11:01:55 -- common/autotest_common.sh@940 -- # kill -0 76045 00:14:26.645 11:01:55 -- common/autotest_common.sh@941 -- # uname 00:14:26.645 11:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:26.645 11:01:55 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 76045 00:14:26.645 killing process with pid 76045 00:14:26.645 11:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:26.645 11:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:26.645 11:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76045' 00:14:26.645 11:01:55 -- common/autotest_common.sh@955 -- # kill 76045 00:14:26.645 11:01:55 -- common/autotest_common.sh@960 -- # wait 76045 00:14:26.905 11:01:55 -- event/cpu_locks.sh@16 -- # [[ -z 76075 ]] 00:14:26.906 11:01:55 -- event/cpu_locks.sh@16 -- # killprocess 76075 00:14:26.906 11:01:55 -- common/autotest_common.sh@936 -- # '[' -z 76075 ']' 00:14:26.906 11:01:55 -- common/autotest_common.sh@940 -- # kill -0 76075 00:14:26.906 11:01:55 -- common/autotest_common.sh@941 -- # uname 00:14:26.906 11:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:26.906 11:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76075 00:14:27.164 killing process with pid 76075 00:14:27.164 11:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:27.164 11:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:27.164 11:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76075' 00:14:27.164 11:01:55 -- common/autotest_common.sh@955 -- # kill 76075 00:14:27.164 11:01:55 -- common/autotest_common.sh@960 -- # wait 76075 00:14:27.422 11:01:55 -- event/cpu_locks.sh@18 -- # rm -f 00:14:27.422 Process with pid 76045 is not found 00:14:27.422 Process with pid 76075 is not found 00:14:27.422 11:01:55 -- event/cpu_locks.sh@1 -- # cleanup 00:14:27.422 11:01:55 -- event/cpu_locks.sh@15 -- # [[ -z 76045 ]] 00:14:27.422 11:01:55 -- event/cpu_locks.sh@15 -- # killprocess 76045 00:14:27.422 11:01:55 -- common/autotest_common.sh@936 -- # '[' -z 76045 ']' 00:14:27.422 11:01:55 -- common/autotest_common.sh@940 -- # kill -0 76045 00:14:27.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76045) - No such process 00:14:27.422 11:01:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76045 is not found' 00:14:27.422 11:01:55 -- event/cpu_locks.sh@16 -- # [[ -z 76075 ]] 00:14:27.422 11:01:55 -- event/cpu_locks.sh@16 -- # killprocess 76075 00:14:27.422 11:01:55 -- common/autotest_common.sh@936 -- # '[' -z 76075 ']' 00:14:27.422 11:01:55 -- common/autotest_common.sh@940 -- # kill -0 76075 00:14:27.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76075) - No such process 00:14:27.422 11:01:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76075 is not found' 00:14:27.422 11:01:55 -- event/cpu_locks.sh@18 -- # rm -f 00:14:27.422 00:14:27.422 real 0m21.170s 00:14:27.422 user 0m36.497s 00:14:27.422 sys 0m5.751s 00:14:27.422 11:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:27.422 ************************************ 00:14:27.422 END TEST cpu_locks 00:14:27.422 ************************************ 00:14:27.422 11:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:27.422 ************************************ 00:14:27.422 END TEST event 00:14:27.422 ************************************ 00:14:27.422 00:14:27.422 real 0m50.751s 00:14:27.422 user 1m38.085s 00:14:27.422 sys 0m9.981s 00:14:27.422 11:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:27.422 11:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:27.422 11:01:56 -- spdk/autotest.sh@178 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:14:27.422 11:01:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:27.422 11:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.422 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:14:27.680 ************************************ 00:14:27.680 START TEST thread 00:14:27.680 ************************************ 00:14:27.680 11:01:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:14:27.680 * Looking for test storage... 00:14:27.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:14:27.680 11:01:56 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:14:27.680 11:01:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:14:27.680 11:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.680 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:14:27.680 ************************************ 00:14:27.680 START TEST thread_poller_perf 00:14:27.680 ************************************ 00:14:27.680 11:01:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:14:27.680 [2024-04-18 11:01:56.250753] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:27.680 [2024-04-18 11:01:56.250854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76236 ] 00:14:27.938 [2024-04-18 11:01:56.389787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.938 [2024-04-18 11:01:56.485474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.938 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:14:29.374 ====================================== 00:14:29.374 busy:2208515225 (cyc) 00:14:29.374 total_run_count: 297000 00:14:29.374 tsc_hz: 2200000000 (cyc) 00:14:29.374 ====================================== 00:14:29.374 poller_cost: 7436 (cyc), 3380 (nsec) 00:14:29.374 00:14:29.374 real 0m1.332s 00:14:29.374 user 0m1.172s 00:14:29.374 sys 0m0.052s 00:14:29.374 11:01:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:29.374 11:01:57 -- common/autotest_common.sh@10 -- # set +x 00:14:29.374 ************************************ 00:14:29.374 END TEST thread_poller_perf 00:14:29.374 ************************************ 00:14:29.374 11:01:57 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:29.374 11:01:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:14:29.374 11:01:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.374 11:01:57 -- common/autotest_common.sh@10 -- # set +x 00:14:29.374 ************************************ 00:14:29.374 START TEST thread_poller_perf 00:14:29.374 ************************************ 00:14:29.374 11:01:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:29.374 [2024-04-18 11:01:57.688589] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
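The poller_cost figures in the result block above follow directly from the reported counters: busy cycles divided by total_run_count gives 2208515225 / 297000 ≈ 7436 cycles per poller call, and with tsc_hz = 2200000000 (2.2 cycles per nanosecond) that is 7436 / 2.2 ≈ 3380 nsec. The zero-period run that starts here (-l 0) reports 2202009162 / 4204000 ≈ 523 cycles ≈ 237 nsec by the same arithmetic. A quick shell check of the first run (plain integer math, nothing SPDK-specific):

    echo $(( 2208515225 / 297000 ))              # -> 7436  cycles per poll
    echo $(( 7436 * 1000000000 / 2200000000 ))   # -> 3380  nanoseconds per poll

Reading -b 1000 -l 1 -t 1 against the 'Running 1000 pollers for 1 seconds with 1 microseconds period' banner suggests the flags are poller count, period in microseconds, and run time in seconds; that mapping is inferred from this log rather than taken from the tool's help output.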
00:14:29.374 [2024-04-18 11:01:57.688833] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76276 ] 00:14:29.374 [2024-04-18 11:01:57.828624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.374 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:14:29.374 [2024-04-18 11:01:57.922577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.762 ====================================== 00:14:30.762 busy:2202009162 (cyc) 00:14:30.762 total_run_count: 4204000 00:14:30.762 tsc_hz: 2200000000 (cyc) 00:14:30.762 ====================================== 00:14:30.762 poller_cost: 523 (cyc), 237 (nsec) 00:14:30.762 00:14:30.762 real 0m1.323s 00:14:30.762 user 0m1.160s 00:14:30.762 sys 0m0.055s 00:14:30.763 ************************************ 00:14:30.763 11:01:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.763 11:01:58 -- common/autotest_common.sh@10 -- # set +x 00:14:30.763 END TEST thread_poller_perf 00:14:30.763 ************************************ 00:14:30.763 11:01:59 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:14:30.763 ************************************ 00:14:30.763 END TEST thread 00:14:30.763 ************************************ 00:14:30.763 00:14:30.763 real 0m2.955s 00:14:30.763 user 0m2.439s 00:14:30.763 sys 0m0.274s 00:14:30.763 11:01:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.763 11:01:59 -- common/autotest_common.sh@10 -- # set +x 00:14:30.763 11:01:59 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:14:30.763 11:01:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.763 11:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.763 11:01:59 -- common/autotest_common.sh@10 -- # set +x 00:14:30.763 ************************************ 00:14:30.763 START TEST accel 00:14:30.763 ************************************ 00:14:30.763 11:01:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:14:30.763 * Looking for test storage... 00:14:30.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:14:30.763 11:01:59 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:14:30.763 11:01:59 -- accel/accel.sh@82 -- # get_expected_opcs 00:14:30.763 11:01:59 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:30.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.763 11:01:59 -- accel/accel.sh@62 -- # spdk_tgt_pid=76356 00:14:30.763 11:01:59 -- accel/accel.sh@63 -- # waitforlisten 76356 00:14:30.763 11:01:59 -- common/autotest_common.sh@817 -- # '[' -z 76356 ']' 00:14:30.763 11:01:59 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:14:30.763 11:01:59 -- accel/accel.sh@61 -- # build_accel_config 00:14:30.763 11:01:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.763 11:01:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:30.763 11:01:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:30.763 11:01:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:30.763 11:01:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:30.763 11:01:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:30.763 11:01:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:30.763 11:01:59 -- common/autotest_common.sh@10 -- # set +x 00:14:30.763 11:01:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:30.763 11:01:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:30.763 11:01:59 -- accel/accel.sh@40 -- # local IFS=, 00:14:30.763 11:01:59 -- accel/accel.sh@41 -- # jq -r . 00:14:30.763 [2024-04-18 11:01:59.279358] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:30.763 [2024-04-18 11:01:59.279686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76356 ] 00:14:31.021 [2024-04-18 11:01:59.415720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.021 [2024-04-18 11:01:59.510951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.599 11:02:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:31.599 11:02:00 -- common/autotest_common.sh@850 -- # return 0 00:14:31.599 11:02:00 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:14:31.599 11:02:00 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:14:31.599 11:02:00 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:14:31.599 11:02:00 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:14:31.599 11:02:00 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:14:31.599 11:02:00 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:14:31.599 11:02:00 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:14:31.599 11:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:31.599 11:02:00 -- common/autotest_common.sh@10 -- # set +x 00:14:31.863 11:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 
00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # IFS== 00:14:31.863 11:02:00 -- accel/accel.sh@72 -- # read -r opc module 00:14:31.863 11:02:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:14:31.863 11:02:00 -- accel/accel.sh@75 -- # killprocess 76356 00:14:31.863 11:02:00 -- common/autotest_common.sh@936 -- # '[' -z 76356 ']' 00:14:31.863 11:02:00 -- common/autotest_common.sh@940 -- # kill -0 76356 00:14:31.863 11:02:00 -- common/autotest_common.sh@941 -- # uname 00:14:31.863 11:02:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.863 11:02:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76356 00:14:31.863 11:02:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.863 11:02:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.863 11:02:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76356' 00:14:31.863 killing process with pid 76356 00:14:31.863 11:02:00 -- common/autotest_common.sh@955 -- # kill 76356 00:14:31.863 11:02:00 -- common/autotest_common.sh@960 -- # wait 76356 00:14:32.121 11:02:00 -- accel/accel.sh@76 -- # trap - ERR 00:14:32.121 11:02:00 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:14:32.121 11:02:00 -- common/autotest_common.sh@1087 -- # '[' 
3 -le 1 ']' 00:14:32.121 11:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.121 11:02:00 -- common/autotest_common.sh@10 -- # set +x 00:14:32.121 11:02:00 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:14:32.121 11:02:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:14:32.121 11:02:00 -- accel/accel.sh@12 -- # build_accel_config 00:14:32.121 11:02:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:32.121 11:02:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:32.121 11:02:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:32.121 11:02:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:32.121 11:02:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:32.121 11:02:00 -- accel/accel.sh@40 -- # local IFS=, 00:14:32.121 11:02:00 -- accel/accel.sh@41 -- # jq -r . 00:14:32.380 11:02:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:32.380 11:02:00 -- common/autotest_common.sh@10 -- # set +x 00:14:32.380 11:02:00 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:14:32.380 11:02:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:32.380 11:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.380 11:02:00 -- common/autotest_common.sh@10 -- # set +x 00:14:32.380 ************************************ 00:14:32.380 START TEST accel_missing_filename 00:14:32.380 ************************************ 00:14:32.380 11:02:00 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:14:32.380 11:02:00 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.380 11:02:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:14:32.380 11:02:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:14:32.380 11:02:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.380 11:02:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:14:32.380 11:02:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.380 11:02:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:14:32.380 11:02:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:14:32.380 11:02:00 -- accel/accel.sh@12 -- # build_accel_config 00:14:32.380 11:02:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:32.380 11:02:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:32.380 11:02:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:32.380 11:02:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:32.380 11:02:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:32.380 11:02:00 -- accel/accel.sh@40 -- # local IFS=, 00:14:32.380 11:02:00 -- accel/accel.sh@41 -- # jq -r . 00:14:32.380 [2024-04-18 11:02:00.906885] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:32.380 [2024-04-18 11:02:00.906972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76434 ] 00:14:32.638 [2024-04-18 11:02:01.042156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.638 [2024-04-18 11:02:01.135868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.638 [2024-04-18 11:02:01.189877] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:32.638 [2024-04-18 11:02:01.263821] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:14:32.896 A filename is required. 00:14:32.896 11:02:01 -- common/autotest_common.sh@641 -- # es=234 00:14:32.896 11:02:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:32.896 ************************************ 00:14:32.896 END TEST accel_missing_filename 00:14:32.896 ************************************ 00:14:32.896 11:02:01 -- common/autotest_common.sh@650 -- # es=106 00:14:32.896 11:02:01 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:32.896 11:02:01 -- common/autotest_common.sh@658 -- # es=1 00:14:32.896 11:02:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:32.896 00:14:32.896 real 0m0.454s 00:14:32.896 user 0m0.290s 00:14:32.896 sys 0m0.104s 00:14:32.896 11:02:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:32.896 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:14:32.896 11:02:01 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:32.896 11:02:01 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:14:32.896 11:02:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.896 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:14:32.896 ************************************ 00:14:32.896 START TEST accel_compress_verify 00:14:32.896 ************************************ 00:14:32.897 11:02:01 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:32.897 11:02:01 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.897 11:02:01 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:32.897 11:02:01 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:14:32.897 11:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.897 11:02:01 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:14:32.897 11:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.897 11:02:01 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:32.897 11:02:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:32.897 11:02:01 -- accel/accel.sh@12 -- # build_accel_config 00:14:32.897 11:02:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:32.897 11:02:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:32.897 11:02:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:32.897 11:02:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:32.897 11:02:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:32.897 11:02:01 -- accel/accel.sh@40 -- # local IFS=, 00:14:32.897 
11:02:01 -- accel/accel.sh@41 -- # jq -r . 00:14:32.897 [2024-04-18 11:02:01.475569] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:32.897 [2024-04-18 11:02:01.475656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76463 ] 00:14:33.200 [2024-04-18 11:02:01.606177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.201 [2024-04-18 11:02:01.713486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.201 [2024-04-18 11:02:01.769319] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.503 [2024-04-18 11:02:01.843741] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:14:33.503 00:14:33.503 Compression does not support the verify option, aborting. 00:14:33.503 11:02:01 -- common/autotest_common.sh@641 -- # es=161 00:14:33.503 11:02:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.503 11:02:01 -- common/autotest_common.sh@650 -- # es=33 00:14:33.503 ************************************ 00:14:33.503 END TEST accel_compress_verify 00:14:33.503 ************************************ 00:14:33.503 11:02:01 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:33.503 11:02:01 -- common/autotest_common.sh@658 -- # es=1 00:14:33.503 11:02:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.503 00:14:33.503 real 0m0.468s 00:14:33.503 user 0m0.294s 00:14:33.503 sys 0m0.115s 00:14:33.503 11:02:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.503 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:14:33.503 11:02:01 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:14:33.503 11:02:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:33.503 11:02:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.503 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:14:33.503 ************************************ 00:14:33.503 START TEST accel_wrong_workload 00:14:33.503 ************************************ 00:14:33.503 11:02:02 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:14:33.503 11:02:02 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.503 11:02:02 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:14:33.503 11:02:02 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:14:33.503 11:02:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.503 11:02:02 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:14:33.503 11:02:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.503 11:02:02 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:14:33.503 11:02:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:14:33.503 11:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:14:33.503 11:02:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:33.503 11:02:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:33.503 11:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:33.504 11:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:33.504 11:02:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:33.504 11:02:02 -- accel/accel.sh@40 -- # local IFS=, 00:14:33.504 11:02:02 -- accel/accel.sh@41 -- # jq -r . 
00:14:33.504 Unsupported workload type: foobar 00:14:33.504 [2024-04-18 11:02:02.057083] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:14:33.504 accel_perf options: 00:14:33.504 [-h help message] 00:14:33.504 [-q queue depth per core] 00:14:33.504 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:14:33.504 [-T number of threads per core 00:14:33.504 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:14:33.504 [-t time in seconds] 00:14:33.504 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:14:33.504 [ dif_verify, , dif_generate, dif_generate_copy 00:14:33.504 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:14:33.504 [-l for compress/decompress workloads, name of uncompressed input file 00:14:33.504 [-S for crc32c workload, use this seed value (default 0) 00:14:33.504 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:14:33.504 [-f for fill workload, use this BYTE value (default 255) 00:14:33.504 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:14:33.504 [-y verify result if this switch is on] 00:14:33.504 [-a tasks to allocate per core (default: same value as -q)] 00:14:33.504 Can be used to spread operations across a wider range of memory. 00:14:33.504 11:02:02 -- common/autotest_common.sh@641 -- # es=1 00:14:33.504 11:02:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.504 11:02:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:33.504 ************************************ 00:14:33.504 END TEST accel_wrong_workload 00:14:33.504 ************************************ 00:14:33.504 11:02:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.504 00:14:33.504 real 0m0.028s 00:14:33.504 user 0m0.012s 00:14:33.504 sys 0m0.016s 00:14:33.504 11:02:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.504 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.504 11:02:02 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:14:33.504 11:02:02 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:14:33.504 11:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.504 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.763 ************************************ 00:14:33.763 START TEST accel_negative_buffers 00:14:33.763 ************************************ 00:14:33.763 11:02:02 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:14:33.763 11:02:02 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.763 11:02:02 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:14:33.763 11:02:02 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:14:33.763 11:02:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.763 11:02:02 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:14:33.763 11:02:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.763 11:02:02 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:14:33.763 11:02:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:14:33.763 11:02:02 -- accel/accel.sh@12 -- # 
build_accel_config 00:14:33.763 11:02:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:33.763 11:02:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:33.763 11:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:33.763 11:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:33.763 11:02:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:33.763 11:02:02 -- accel/accel.sh@40 -- # local IFS=, 00:14:33.763 11:02:02 -- accel/accel.sh@41 -- # jq -r . 00:14:33.763 -x option must be non-negative. 00:14:33.763 [2024-04-18 11:02:02.196870] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:14:33.763 accel_perf options: 00:14:33.763 [-h help message] 00:14:33.763 [-q queue depth per core] 00:14:33.763 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:14:33.763 [-T number of threads per core 00:14:33.763 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:14:33.763 [-t time in seconds] 00:14:33.763 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:14:33.763 [ dif_verify, , dif_generate, dif_generate_copy 00:14:33.763 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:14:33.763 [-l for compress/decompress workloads, name of uncompressed input file 00:14:33.763 [-S for crc32c workload, use this seed value (default 0) 00:14:33.763 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:14:33.763 [-f for fill workload, use this BYTE value (default 255) 00:14:33.763 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:14:33.763 [-y verify result if this switch is on] 00:14:33.763 [-a tasks to allocate per core (default: same value as -q)] 00:14:33.763 Can be used to spread operations across a wider range of memory. 
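Together with the compress cases further up (no -l input file, then -y combined with -w compress), the two NOT-wrapped tests above exercise accel_perf's argument validation: -w foobar is not a known workload and -x -1 is a negative xor source-buffer count, so both runs exit during option parsing and print the usage text. For contrast, the valid invocations this suite runs next (the harness additionally passes -c /dev/fd/62 with its generated accel JSON config) have the form:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y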
00:14:33.763 11:02:02 -- common/autotest_common.sh@641 -- # es=1 00:14:33.763 11:02:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.763 11:02:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:33.763 11:02:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.763 00:14:33.763 real 0m0.029s 00:14:33.763 user 0m0.018s 00:14:33.763 sys 0m0.010s 00:14:33.763 11:02:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.763 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.763 ************************************ 00:14:33.763 END TEST accel_negative_buffers 00:14:33.763 ************************************ 00:14:33.763 11:02:02 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:14:33.763 11:02:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:33.763 11:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.763 11:02:02 -- common/autotest_common.sh@10 -- # set +x 00:14:33.763 ************************************ 00:14:33.763 START TEST accel_crc32c 00:14:33.763 ************************************ 00:14:33.763 11:02:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:14:33.763 11:02:02 -- accel/accel.sh@16 -- # local accel_opc 00:14:33.763 11:02:02 -- accel/accel.sh@17 -- # local accel_module 00:14:33.763 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:33.763 11:02:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:14:33.763 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:33.763 11:02:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:14:33.763 11:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:14:33.763 11:02:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:33.763 11:02:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:33.763 11:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:33.763 11:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:33.763 11:02:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:33.763 11:02:02 -- accel/accel.sh@40 -- # local IFS=, 00:14:33.763 11:02:02 -- accel/accel.sh@41 -- # jq -r . 00:14:33.763 [2024-04-18 11:02:02.347886] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:33.763 [2024-04-18 11:02:02.347975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76539 ] 00:14:34.022 [2024-04-18 11:02:02.486220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.022 [2024-04-18 11:02:02.580291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.022 11:02:02 -- accel/accel.sh@20 -- # val= 00:14:34.022 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val= 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=0x1 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val= 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val= 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=crc32c 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=32 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val= 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=software 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@22 -- # accel_module=software 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=32 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=32 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=1 00:14:34.023 11:02:02 
-- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val=Yes 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val= 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:34.023 11:02:02 -- accel/accel.sh@20 -- # val= 00:14:34.023 11:02:02 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # IFS=: 00:14:34.023 11:02:02 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@20 -- # val= 00:14:35.399 11:02:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # IFS=: 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@20 -- # val= 00:14:35.399 11:02:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # IFS=: 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@20 -- # val= 00:14:35.399 11:02:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # IFS=: 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@20 -- # val= 00:14:35.399 11:02:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # IFS=: 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@20 -- # val= 00:14:35.399 ************************************ 00:14:35.399 END TEST accel_crc32c 00:14:35.399 ************************************ 00:14:35.399 11:02:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # IFS=: 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@20 -- # val= 00:14:35.399 11:02:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # IFS=: 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:35.399 11:02:03 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:35.399 11:02:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:35.399 00:14:35.399 real 0m1.470s 00:14:35.399 user 0m1.267s 00:14:35.399 sys 0m0.108s 00:14:35.399 11:02:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.399 11:02:03 -- common/autotest_common.sh@10 -- # set +x 00:14:35.399 11:02:03 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:14:35.399 11:02:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:35.399 11:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.399 11:02:03 -- common/autotest_common.sh@10 -- # set +x 00:14:35.399 ************************************ 00:14:35.399 START TEST accel_crc32c_C2 00:14:35.399 
************************************ 00:14:35.399 11:02:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:14:35.399 11:02:03 -- accel/accel.sh@16 -- # local accel_opc 00:14:35.399 11:02:03 -- accel/accel.sh@17 -- # local accel_module 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # IFS=: 00:14:35.399 11:02:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:14:35.399 11:02:03 -- accel/accel.sh@19 -- # read -r var val 00:14:35.399 11:02:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:14:35.399 11:02:03 -- accel/accel.sh@12 -- # build_accel_config 00:14:35.399 11:02:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:35.399 11:02:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:35.399 11:02:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:35.399 11:02:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:35.399 11:02:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:35.399 11:02:03 -- accel/accel.sh@40 -- # local IFS=, 00:14:35.399 11:02:03 -- accel/accel.sh@41 -- # jq -r . 00:14:35.399 [2024-04-18 11:02:03.937643] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:35.399 [2024-04-18 11:02:03.937729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76579 ] 00:14:35.657 [2024-04-18 11:02:04.071665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.657 [2024-04-18 11:02:04.167628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.657 11:02:04 -- accel/accel.sh@20 -- # val= 00:14:35.657 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.657 11:02:04 -- accel/accel.sh@20 -- # val= 00:14:35.657 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.657 11:02:04 -- accel/accel.sh@20 -- # val=0x1 00:14:35.657 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.657 11:02:04 -- accel/accel.sh@20 -- # val= 00:14:35.657 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.657 11:02:04 -- accel/accel.sh@20 -- # val= 00:14:35.657 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.657 11:02:04 -- accel/accel.sh@20 -- # val=crc32c 00:14:35.657 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.657 11:02:04 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.657 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.657 11:02:04 -- accel/accel.sh@20 -- # val=0 00:14:35.657 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" 
in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val= 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val=software 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@22 -- # accel_module=software 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val=32 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val=32 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val=1 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val=Yes 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val= 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:35.658 11:02:04 -- accel/accel.sh@20 -- # val= 00:14:35.658 11:02:04 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # IFS=: 00:14:35.658 11:02:04 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.035 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.035 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.035 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.035 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.035 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 11:02:05 -- accel/accel.sh@20 -- # val= 
00:14:37.035 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 ************************************ 00:14:37.035 END TEST accel_crc32c_C2 00:14:37.035 ************************************ 00:14:37.035 11:02:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:37.035 11:02:05 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:37.035 11:02:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:37.035 00:14:37.035 real 0m1.465s 00:14:37.035 user 0m1.264s 00:14:37.035 sys 0m0.107s 00:14:37.035 11:02:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.035 11:02:05 -- common/autotest_common.sh@10 -- # set +x 00:14:37.035 11:02:05 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:14:37.035 11:02:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:37.035 11:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.035 11:02:05 -- common/autotest_common.sh@10 -- # set +x 00:14:37.035 ************************************ 00:14:37.035 START TEST accel_copy 00:14:37.035 ************************************ 00:14:37.035 11:02:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:14:37.035 11:02:05 -- accel/accel.sh@16 -- # local accel_opc 00:14:37.035 11:02:05 -- accel/accel.sh@17 -- # local accel_module 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.035 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.035 11:02:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:14:37.035 11:02:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:14:37.035 11:02:05 -- accel/accel.sh@12 -- # build_accel_config 00:14:37.035 11:02:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:37.035 11:02:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:37.035 11:02:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:37.035 11:02:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:37.035 11:02:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:37.035 11:02:05 -- accel/accel.sh@40 -- # local IFS=, 00:14:37.035 11:02:05 -- accel/accel.sh@41 -- # jq -r . 00:14:37.035 [2024-04-18 11:02:05.515484] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
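Each accel_* test in this block finishes with the same assertion pattern: it reads the module/opcode pair back from accel_perf's output and checks [[ -n software ]], [[ -n <opcode> ]], and [[ software == software ]]. That expectation comes from the get_expected_opcs loop earlier in the log, where accel_get_opc_assignments mapped every opcode to the software module, consistent with no hardware accel driver being configured for this run; an opcode reported by any other module would fail the comparison.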
00:14:37.035 [2024-04-18 11:02:05.515580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76617 ] 00:14:37.035 [2024-04-18 11:02:05.651530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.295 [2024-04-18 11:02:05.747790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val=0x1 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val=copy 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@23 -- # accel_opc=copy 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val=software 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@22 -- # accel_module=software 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val=32 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val=32 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val=1 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:37.295 
11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val=Yes 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:37.295 11:02:05 -- accel/accel.sh@20 -- # val= 00:14:37.295 11:02:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # IFS=: 00:14:37.295 11:02:05 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:06 -- accel/accel.sh@20 -- # val= 00:14:38.671 11:02:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # IFS=: 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:06 -- accel/accel.sh@20 -- # val= 00:14:38.671 11:02:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # IFS=: 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:06 -- accel/accel.sh@20 -- # val= 00:14:38.671 11:02:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # IFS=: 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:06 -- accel/accel.sh@20 -- # val= 00:14:38.671 11:02:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # IFS=: 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:06 -- accel/accel.sh@20 -- # val= 00:14:38.671 11:02:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # IFS=: 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:06 -- accel/accel.sh@20 -- # val= 00:14:38.671 11:02:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # IFS=: 00:14:38.671 11:02:06 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:38.671 11:02:06 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:14:38.671 11:02:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:38.671 00:14:38.671 real 0m1.466s 00:14:38.671 user 0m1.254s 00:14:38.671 sys 0m0.114s 00:14:38.671 11:02:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.671 ************************************ 00:14:38.671 END TEST accel_copy 00:14:38.671 ************************************ 00:14:38.671 11:02:06 -- common/autotest_common.sh@10 -- # set +x 00:14:38.671 11:02:06 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:38.671 11:02:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:38.671 11:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.671 11:02:06 -- common/autotest_common.sh@10 -- # set +x 00:14:38.671 ************************************ 00:14:38.671 START TEST accel_fill 00:14:38.671 ************************************ 00:14:38.671 11:02:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:38.671 11:02:07 -- accel/accel.sh@16 -- # local accel_opc 00:14:38.671 11:02:07 -- accel/accel.sh@17 -- # local 
accel_module 00:14:38.671 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.671 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.671 11:02:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:38.671 11:02:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:38.671 11:02:07 -- accel/accel.sh@12 -- # build_accel_config 00:14:38.671 11:02:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:38.671 11:02:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:38.671 11:02:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:38.671 11:02:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:38.671 11:02:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:38.671 11:02:07 -- accel/accel.sh@40 -- # local IFS=, 00:14:38.671 11:02:07 -- accel/accel.sh@41 -- # jq -r . 00:14:38.671 [2024-04-18 11:02:07.100290] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:38.671 [2024-04-18 11:02:07.100393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76656 ] 00:14:38.671 [2024-04-18 11:02:07.240868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.929 [2024-04-18 11:02:07.334697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val= 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val= 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val=0x1 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val= 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val= 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val=fill 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@23 -- # accel_opc=fill 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val=0x80 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val= 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case 
"$var" in 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.929 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.929 11:02:07 -- accel/accel.sh@20 -- # val=software 00:14:38.929 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@22 -- # accel_module=software 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.930 11:02:07 -- accel/accel.sh@20 -- # val=64 00:14:38.930 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.930 11:02:07 -- accel/accel.sh@20 -- # val=64 00:14:38.930 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.930 11:02:07 -- accel/accel.sh@20 -- # val=1 00:14:38.930 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.930 11:02:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:38.930 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.930 11:02:07 -- accel/accel.sh@20 -- # val=Yes 00:14:38.930 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.930 11:02:07 -- accel/accel.sh@20 -- # val= 00:14:38.930 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:38.930 11:02:07 -- accel/accel.sh@20 -- # val= 00:14:38.930 11:02:07 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # IFS=: 00:14:38.930 11:02:07 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.304 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.304 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.304 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.304 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 ************************************ 00:14:40.304 END TEST accel_fill 00:14:40.304 ************************************ 00:14:40.304 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.304 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.304 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.304 11:02:08 -- 
accel/accel.sh@19 -- # IFS=: 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 11:02:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:40.304 11:02:08 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:14:40.304 11:02:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:40.304 00:14:40.304 real 0m1.472s 00:14:40.304 user 0m1.273s 00:14:40.304 sys 0m0.104s 00:14:40.304 11:02:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.304 11:02:08 -- common/autotest_common.sh@10 -- # set +x 00:14:40.304 11:02:08 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:14:40.304 11:02:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:40.304 11:02:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.304 11:02:08 -- common/autotest_common.sh@10 -- # set +x 00:14:40.304 ************************************ 00:14:40.304 START TEST accel_copy_crc32c 00:14:40.304 ************************************ 00:14:40.304 11:02:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:14:40.304 11:02:08 -- accel/accel.sh@16 -- # local accel_opc 00:14:40.304 11:02:08 -- accel/accel.sh@17 -- # local accel_module 00:14:40.304 11:02:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.304 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.304 11:02:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:14:40.304 11:02:08 -- accel/accel.sh@12 -- # build_accel_config 00:14:40.304 11:02:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:40.304 11:02:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:40.304 11:02:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:40.304 11:02:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:40.304 11:02:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:40.304 11:02:08 -- accel/accel.sh@40 -- # local IFS=, 00:14:40.304 11:02:08 -- accel/accel.sh@41 -- # jq -r . 00:14:40.304 [2024-04-18 11:02:08.679998] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
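For orientation while reading the trace: each TEST block above reduces to a single accel_perf invocation, echoed verbatim on the accel.sh@12 lines. A minimal sketch for replaying the fill case by hand, assuming the SPDK tree is built at the same path as on this CI host, would be:

# Sketch only: replays the 'fill' run recorded above, outside the autotest harness.
# Binary path and flags are copied verbatim from the accel.sh@12 trace line; the
# harness additionally feeds what looks like a JSON accel config on fd 62
# (-c /dev/fd/62), which is omitted here.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y

Each test then asserts that the software module handled the operation, which is what the trailing [[ -n software ]] / [[ software == \s\o\f\t\w\a\r\e ]] checks in the trace are for.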
00:14:40.304 [2024-04-18 11:02:08.680112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76695 ] 00:14:40.304 [2024-04-18 11:02:08.816098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.304 [2024-04-18 11:02:08.910887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=0x1 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=0 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=software 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@22 -- # accel_module=software 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=32 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=32 
00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=1 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val=Yes 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.563 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.563 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:40.563 11:02:08 -- accel/accel.sh@20 -- # val= 00:14:40.564 11:02:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.564 11:02:08 -- accel/accel.sh@19 -- # IFS=: 00:14:40.564 11:02:08 -- accel/accel.sh@19 -- # read -r var val 00:14:41.500 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:41.500 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:41.500 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:41.500 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:41.500 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:41.500 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:41.500 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:41.500 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:41.500 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:41.500 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:41.500 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:41.500 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:41.500 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:41.500 11:02:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:41.500 11:02:10 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:41.500 11:02:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:41.500 00:14:41.500 real 0m1.464s 00:14:41.500 user 0m1.263s 00:14:41.500 sys 0m0.106s 00:14:41.500 ************************************ 00:14:41.500 END TEST accel_copy_crc32c 00:14:41.500 ************************************ 00:14:41.500 11:02:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.500 11:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.759 11:02:10 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:14:41.759 11:02:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:14:41.759 11:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.759 11:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.759 ************************************ 00:14:41.759 START TEST accel_copy_crc32c_C2 00:14:41.759 ************************************ 00:14:41.759 11:02:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:14:41.759 11:02:10 -- accel/accel.sh@16 -- # local accel_opc 00:14:41.759 11:02:10 -- accel/accel.sh@17 -- # local accel_module 00:14:41.759 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:41.759 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:41.759 11:02:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:14:41.759 11:02:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:14:41.759 11:02:10 -- accel/accel.sh@12 -- # build_accel_config 00:14:41.759 11:02:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:41.759 11:02:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:41.759 11:02:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:41.759 11:02:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:41.759 11:02:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:41.759 11:02:10 -- accel/accel.sh@40 -- # local IFS=, 00:14:41.759 11:02:10 -- accel/accel.sh@41 -- # jq -r . 00:14:41.759 [2024-04-18 11:02:10.264147] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:41.759 [2024-04-18 11:02:10.264228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76734 ] 00:14:41.759 [2024-04-18 11:02:10.395314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.018 [2024-04-18 11:02:10.493970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:42.018 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:42.018 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val=0x1 00:14:42.018 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:42.018 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:42.018 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:42.018 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val=0 00:14:42.018 11:02:10 -- 
accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.018 11:02:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:42.018 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.018 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val='8192 bytes' 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val=software 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@22 -- # accel_module=software 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val=32 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val=32 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val=1 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val=Yes 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:42.019 11:02:10 -- accel/accel.sh@20 -- # val= 00:14:42.019 11:02:10 -- accel/accel.sh@21 -- # case "$var" in 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # IFS=: 00:14:42.019 11:02:10 -- accel/accel.sh@19 -- # read -r var val 00:14:43.396 11:02:11 -- accel/accel.sh@20 -- # val= 00:14:43.396 11:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # IFS=: 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # read -r var val 00:14:43.396 11:02:11 -- accel/accel.sh@20 -- # val= 00:14:43.396 11:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # IFS=: 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # read -r var val 00:14:43.396 11:02:11 -- accel/accel.sh@20 -- # val= 00:14:43.396 11:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # IFS=: 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # read -r var val 
00:14:43.396 11:02:11 -- accel/accel.sh@20 -- # val= 00:14:43.396 11:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # IFS=: 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # read -r var val 00:14:43.396 11:02:11 -- accel/accel.sh@20 -- # val= 00:14:43.396 11:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # IFS=: 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # read -r var val 00:14:43.396 11:02:11 -- accel/accel.sh@20 -- # val= 00:14:43.396 11:02:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # IFS=: 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # read -r var val 00:14:43.396 11:02:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:43.396 11:02:11 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:43.396 11:02:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:43.396 00:14:43.396 real 0m1.459s 00:14:43.396 user 0m1.253s 00:14:43.396 sys 0m0.114s 00:14:43.396 11:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:43.396 ************************************ 00:14:43.396 END TEST accel_copy_crc32c_C2 00:14:43.396 ************************************ 00:14:43.396 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:43.396 11:02:11 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:14:43.396 11:02:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:43.396 11:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:43.396 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:43.396 ************************************ 00:14:43.396 START TEST accel_dualcast 00:14:43.396 ************************************ 00:14:43.396 11:02:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:14:43.396 11:02:11 -- accel/accel.sh@16 -- # local accel_opc 00:14:43.396 11:02:11 -- accel/accel.sh@17 -- # local accel_module 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # IFS=: 00:14:43.396 11:02:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:14:43.396 11:02:11 -- accel/accel.sh@19 -- # read -r var val 00:14:43.396 11:02:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:14:43.396 11:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:14:43.396 11:02:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:43.396 11:02:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:43.396 11:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:43.396 11:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:43.396 11:02:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:43.396 11:02:11 -- accel/accel.sh@40 -- # local IFS=, 00:14:43.397 11:02:11 -- accel/accel.sh@41 -- # jq -r . 00:14:43.397 [2024-04-18 11:02:11.840011] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
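The two copy_crc32c runs above are identical except that the second passes -C 2, and one of the traced buffer sizes grows from '4096 bytes' to '8192 bytes' (consistent with two 4096-byte chunks being processed). A sketch of the pair, under the same workspace assumption as the earlier snippet:

# Sketch: the plain and -C 2 copy_crc32c cases as echoed by accel.sh@12 above.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y        # trace shows two 4096-byte buffers
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2   # trace shows a 4096-byte and an 8192-byte buffer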
00:14:43.397 [2024-04-18 11:02:11.840132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76775 ] 00:14:43.397 [2024-04-18 11:02:11.979388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.655 [2024-04-18 11:02:12.081727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val= 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val= 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val=0x1 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val= 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val= 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val=dualcast 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val= 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val=software 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@22 -- # accel_module=software 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val=32 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val=32 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val=1 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val='1 seconds' 
00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val=Yes 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val= 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:43.655 11:02:12 -- accel/accel.sh@20 -- # val= 00:14:43.655 11:02:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # IFS=: 00:14:43.655 11:02:12 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.030 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.030 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.030 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.030 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.030 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.030 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:45.030 11:02:13 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:14:45.030 11:02:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:45.030 00:14:45.030 real 0m1.483s 00:14:45.030 user 0m1.278s 00:14:45.030 sys 0m0.111s 00:14:45.030 11:02:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.030 11:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:45.030 ************************************ 00:14:45.030 END TEST accel_dualcast 00:14:45.030 ************************************ 00:14:45.030 11:02:13 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:14:45.030 11:02:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:45.030 11:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.030 11:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:45.030 ************************************ 00:14:45.030 START TEST accel_compare 00:14:45.030 ************************************ 00:14:45.030 11:02:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:14:45.030 11:02:13 -- accel/accel.sh@16 -- # local accel_opc 00:14:45.030 11:02:13 -- accel/accel.sh@17 -- # local 
accel_module 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.030 11:02:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:14:45.030 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.030 11:02:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:14:45.030 11:02:13 -- accel/accel.sh@12 -- # build_accel_config 00:14:45.030 11:02:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:45.030 11:02:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:45.030 11:02:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:45.030 11:02:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:45.030 11:02:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:45.030 11:02:13 -- accel/accel.sh@40 -- # local IFS=, 00:14:45.030 11:02:13 -- accel/accel.sh@41 -- # jq -r . 00:14:45.030 [2024-04-18 11:02:13.427257] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:45.030 [2024-04-18 11:02:13.427385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76811 ] 00:14:45.030 [2024-04-18 11:02:13.566201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.030 [2024-04-18 11:02:13.663109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val=0x1 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val=compare 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@23 -- # accel_opc=compare 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val=software 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 
00:14:45.287 11:02:13 -- accel/accel.sh@22 -- # accel_module=software 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val=32 00:14:45.287 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.287 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.287 11:02:13 -- accel/accel.sh@20 -- # val=32 00:14:45.288 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.288 11:02:13 -- accel/accel.sh@20 -- # val=1 00:14:45.288 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.288 11:02:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:45.288 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.288 11:02:13 -- accel/accel.sh@20 -- # val=Yes 00:14:45.288 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.288 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.288 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:45.288 11:02:13 -- accel/accel.sh@20 -- # val= 00:14:45.288 11:02:13 -- accel/accel.sh@21 -- # case "$var" in 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # IFS=: 00:14:45.288 11:02:13 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@20 -- # val= 00:14:46.659 11:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # IFS=: 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@20 -- # val= 00:14:46.659 11:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # IFS=: 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@20 -- # val= 00:14:46.659 11:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # IFS=: 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@20 -- # val= 00:14:46.659 11:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # IFS=: 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@20 -- # val= 00:14:46.659 11:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # IFS=: 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@20 -- # val= 00:14:46.659 11:02:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # IFS=: 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:46.659 ************************************ 00:14:46.659 END TEST accel_compare 00:14:46.659 ************************************ 00:14:46.659 11:02:14 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:14:46.659 11:02:14 -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:46.659 00:14:46.659 real 0m1.475s 00:14:46.659 user 0m1.272s 00:14:46.659 sys 0m0.107s 00:14:46.659 11:02:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:46.659 11:02:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.659 11:02:14 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:14:46.659 11:02:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:46.659 11:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.659 11:02:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.659 ************************************ 00:14:46.659 START TEST accel_xor 00:14:46.659 ************************************ 00:14:46.659 11:02:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:14:46.659 11:02:14 -- accel/accel.sh@16 -- # local accel_opc 00:14:46.659 11:02:14 -- accel/accel.sh@17 -- # local accel_module 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # IFS=: 00:14:46.659 11:02:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:14:46.659 11:02:14 -- accel/accel.sh@19 -- # read -r var val 00:14:46.659 11:02:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:14:46.659 11:02:14 -- accel/accel.sh@12 -- # build_accel_config 00:14:46.659 11:02:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:46.659 11:02:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:46.659 11:02:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:46.659 11:02:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:46.659 11:02:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:46.659 11:02:14 -- accel/accel.sh@40 -- # local IFS=, 00:14:46.659 11:02:14 -- accel/accel.sh@41 -- # jq -r . 00:14:46.659 [2024-04-18 11:02:15.020194] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:46.659 [2024-04-18 11:02:15.020339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76855 ] 00:14:46.659 [2024-04-18 11:02:15.169410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.659 [2024-04-18 11:02:15.265261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val= 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val= 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=0x1 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val= 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val= 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=xor 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@23 -- # accel_opc=xor 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=2 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val= 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=software 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@22 -- # accel_module=software 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=32 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=32 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=1 00:14:46.955 11:02:15 -- 
accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val=Yes 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val= 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:46.955 11:02:15 -- accel/accel.sh@20 -- # val= 00:14:46.955 11:02:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # IFS=: 00:14:46.955 11:02:15 -- accel/accel.sh@19 -- # read -r var val 00:14:47.892 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:47.892 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:47.892 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:47.892 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:47.892 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:47.892 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:47.892 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:47.892 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:47.892 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:47.892 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:47.892 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:47.892 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:47.892 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:47.892 11:02:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:47.892 11:02:16 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:47.892 11:02:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:47.892 00:14:47.892 real 0m1.491s 00:14:47.892 user 0m1.271s 00:14:47.892 sys 0m0.126s 00:14:47.892 11:02:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:47.892 ************************************ 00:14:47.892 END TEST accel_xor 00:14:47.892 ************************************ 00:14:47.892 11:02:16 -- common/autotest_common.sh@10 -- # set +x 00:14:47.892 11:02:16 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:14:47.892 11:02:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:47.892 11:02:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.892 11:02:16 -- common/autotest_common.sh@10 -- # set +x 00:14:48.150 ************************************ 00:14:48.150 START TEST accel_xor 00:14:48.150 ************************************ 00:14:48.150 
11:02:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:14:48.150 11:02:16 -- accel/accel.sh@16 -- # local accel_opc 00:14:48.150 11:02:16 -- accel/accel.sh@17 -- # local accel_module 00:14:48.150 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.150 11:02:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:14:48.150 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.150 11:02:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:48.150 11:02:16 -- accel/accel.sh@12 -- # build_accel_config 00:14:48.150 11:02:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:48.150 11:02:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:48.150 11:02:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:48.150 11:02:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:48.150 11:02:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:48.150 11:02:16 -- accel/accel.sh@40 -- # local IFS=, 00:14:48.150 11:02:16 -- accel/accel.sh@41 -- # jq -r . 00:14:48.150 [2024-04-18 11:02:16.617218] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:48.150 [2024-04-18 11:02:16.617630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76890 ] 00:14:48.150 [2024-04-18 11:02:16.752879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.408 [2024-04-18 11:02:16.857747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=0x1 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=xor 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@23 -- # accel_opc=xor 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=3 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 
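Both xor tests drive the same workload; the only difference visible in the trace is the extra -x 3 argument, which, judging by the val=2 line in the first run and the val=3 line in this one, selects the number of xor source buffers. As a sketch, again assuming the CI host's paths:

# Sketch: the two xor runs recorded in this section.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y          # first run, traced with val=2
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3     # this run, traced with val=3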
00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=software 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@22 -- # accel_module=software 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=32 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=32 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=1 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val=Yes 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:48.408 11:02:16 -- accel/accel.sh@20 -- # val= 00:14:48.408 11:02:16 -- accel/accel.sh@21 -- # case "$var" in 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # IFS=: 00:14:48.408 11:02:16 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:49.780 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:49.780 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:49.780 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:49.780 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:49.780 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:49.780 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 
00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:49.780 11:02:18 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:49.780 11:02:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:49.780 00:14:49.780 real 0m1.476s 00:14:49.780 user 0m1.268s 00:14:49.780 sys 0m0.113s 00:14:49.780 11:02:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:49.780 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:14:49.780 ************************************ 00:14:49.780 END TEST accel_xor 00:14:49.780 ************************************ 00:14:49.780 11:02:18 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:14:49.780 11:02:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:49.780 11:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:49.780 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:14:49.780 ************************************ 00:14:49.780 START TEST accel_dif_verify 00:14:49.780 ************************************ 00:14:49.780 11:02:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:14:49.780 11:02:18 -- accel/accel.sh@16 -- # local accel_opc 00:14:49.780 11:02:18 -- accel/accel.sh@17 -- # local accel_module 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:49.780 11:02:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:14:49.780 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:49.780 11:02:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:49.780 11:02:18 -- accel/accel.sh@12 -- # build_accel_config 00:14:49.780 11:02:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:49.780 11:02:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:49.780 11:02:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:49.780 11:02:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:49.780 11:02:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:49.780 11:02:18 -- accel/accel.sh@40 -- # local IFS=, 00:14:49.780 11:02:18 -- accel/accel.sh@41 -- # jq -r . 00:14:49.780 [2024-04-18 11:02:18.207884] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
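The dif_verify case starting here follows the same pattern but, as the trace below shows, records extra sizes alongside the two 4096-byte buffers: '512 bytes' and '8 bytes', presumably the DIF block size and per-block metadata size. The bare invocation, under the same assumptions as the earlier sketches:

# Sketch: the dif_verify run as echoed by accel.sh@12 in this test.
# The harness also passes -c /dev/fd/62 for its accel config; omitted here.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_verify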
00:14:49.780 [2024-04-18 11:02:18.207972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76934 ] 00:14:49.780 [2024-04-18 11:02:18.348563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.092 [2024-04-18 11:02:18.450501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val=0x1 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val=dif_verify 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val='512 bytes' 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val='8 bytes' 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val=software 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@22 -- # accel_module=software 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 
-- # val=32 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val=32 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val=1 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val=No 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:50.092 11:02:18 -- accel/accel.sh@20 -- # val= 00:14:50.092 11:02:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # IFS=: 00:14:50.092 11:02:18 -- accel/accel.sh@19 -- # read -r var val 00:14:51.037 11:02:19 -- accel/accel.sh@20 -- # val= 00:14:51.037 11:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # IFS=: 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # read -r var val 00:14:51.037 11:02:19 -- accel/accel.sh@20 -- # val= 00:14:51.037 11:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # IFS=: 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # read -r var val 00:14:51.037 11:02:19 -- accel/accel.sh@20 -- # val= 00:14:51.037 11:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # IFS=: 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # read -r var val 00:14:51.037 11:02:19 -- accel/accel.sh@20 -- # val= 00:14:51.037 11:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # IFS=: 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # read -r var val 00:14:51.037 11:02:19 -- accel/accel.sh@20 -- # val= 00:14:51.037 11:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # IFS=: 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # read -r var val 00:14:51.037 11:02:19 -- accel/accel.sh@20 -- # val= 00:14:51.037 11:02:19 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # IFS=: 00:14:51.037 11:02:19 -- accel/accel.sh@19 -- # read -r var val 00:14:51.037 11:02:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:51.037 11:02:19 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:14:51.037 11:02:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:51.037 00:14:51.037 real 0m1.484s 00:14:51.037 user 0m1.275s 00:14:51.037 sys 0m0.116s 00:14:51.037 11:02:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:51.037 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:14:51.037 ************************************ 00:14:51.037 END TEST 
accel_dif_verify 00:14:51.037 ************************************ 00:14:51.294 11:02:19 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:14:51.295 11:02:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:51.295 11:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.295 11:02:19 -- common/autotest_common.sh@10 -- # set +x 00:14:51.295 ************************************ 00:14:51.295 START TEST accel_dif_generate 00:14:51.295 ************************************ 00:14:51.295 11:02:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:14:51.295 11:02:19 -- accel/accel.sh@16 -- # local accel_opc 00:14:51.295 11:02:19 -- accel/accel.sh@17 -- # local accel_module 00:14:51.295 11:02:19 -- accel/accel.sh@19 -- # IFS=: 00:14:51.295 11:02:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:14:51.295 11:02:19 -- accel/accel.sh@19 -- # read -r var val 00:14:51.295 11:02:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:51.295 11:02:19 -- accel/accel.sh@12 -- # build_accel_config 00:14:51.295 11:02:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:51.295 11:02:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:51.295 11:02:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:51.295 11:02:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:51.295 11:02:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:51.295 11:02:19 -- accel/accel.sh@40 -- # local IFS=, 00:14:51.295 11:02:19 -- accel/accel.sh@41 -- # jq -r . 00:14:51.295 [2024-04-18 11:02:19.805078] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:51.295 [2024-04-18 11:02:19.805174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76967 ] 00:14:51.551 [2024-04-18 11:02:19.946092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.551 [2024-04-18 11:02:20.048357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val= 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val= 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val=0x1 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val= 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val= 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val=dif_generate 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val='512 bytes' 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val='8 bytes' 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val= 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val=software 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@22 -- # accel_module=software 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val=32 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val=32 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val=1 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val=No 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val= 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:51.551 11:02:20 -- accel/accel.sh@20 -- # val= 00:14:51.551 11:02:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # IFS=: 00:14:51.551 11:02:20 -- accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:52.923 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:52.923 11:02:21 -- 
accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:52.923 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:52.923 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:52.923 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:52.923 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:52.923 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:52.923 11:02:21 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:14:52.923 11:02:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:52.923 00:14:52.923 real 0m1.487s 00:14:52.923 user 0m1.278s 00:14:52.923 sys 0m0.115s 00:14:52.923 11:02:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.923 ************************************ 00:14:52.923 END TEST accel_dif_generate 00:14:52.923 ************************************ 00:14:52.923 11:02:21 -- common/autotest_common.sh@10 -- # set +x 00:14:52.923 11:02:21 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:14:52.923 11:02:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:52.923 11:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.923 11:02:21 -- common/autotest_common.sh@10 -- # set +x 00:14:52.923 ************************************ 00:14:52.923 START TEST accel_dif_generate_copy 00:14:52.923 ************************************ 00:14:52.923 11:02:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:14:52.923 11:02:21 -- accel/accel.sh@16 -- # local accel_opc 00:14:52.923 11:02:21 -- accel/accel.sh@17 -- # local accel_module 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:52.923 11:02:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:14:52.923 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:52.923 11:02:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:52.923 11:02:21 -- accel/accel.sh@12 -- # build_accel_config 00:14:52.923 11:02:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:52.923 11:02:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:52.923 11:02:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:52.923 11:02:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:52.923 11:02:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:52.923 11:02:21 -- accel/accel.sh@40 -- # local IFS=, 00:14:52.923 11:02:21 -- accel/accel.sh@41 -- # jq -r . 00:14:52.923 [2024-04-18 11:02:21.406403] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:52.923 [2024-04-18 11:02:21.406516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77011 ] 00:14:52.923 [2024-04-18 11:02:21.550891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.181 [2024-04-18 11:02:21.652083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val=0x1 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val=software 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@22 -- # accel_module=software 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val=32 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val=32 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 
-- # val=1 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val=No 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:53.181 11:02:21 -- accel/accel.sh@20 -- # val= 00:14:53.181 11:02:21 -- accel/accel.sh@21 -- # case "$var" in 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # IFS=: 00:14:53.181 11:02:21 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@20 -- # val= 00:14:54.557 11:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # IFS=: 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@20 -- # val= 00:14:54.557 11:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # IFS=: 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@20 -- # val= 00:14:54.557 11:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # IFS=: 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@20 -- # val= 00:14:54.557 11:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # IFS=: 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@20 -- # val= 00:14:54.557 11:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # IFS=: 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@20 -- # val= 00:14:54.557 11:02:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # IFS=: 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:54.557 11:02:22 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:14:54.557 11:02:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:54.557 00:14:54.557 real 0m1.490s 00:14:54.557 user 0m1.276s 00:14:54.557 sys 0m0.118s 00:14:54.557 11:02:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:54.557 11:02:22 -- common/autotest_common.sh@10 -- # set +x 00:14:54.557 ************************************ 00:14:54.557 END TEST accel_dif_generate_copy 00:14:54.557 ************************************ 00:14:54.557 11:02:22 -- accel/accel.sh@115 -- # [[ y == y ]] 00:14:54.557 11:02:22 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:54.557 11:02:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:14:54.557 11:02:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.557 11:02:22 -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.557 ************************************ 00:14:54.557 START TEST accel_comp 00:14:54.557 ************************************ 00:14:54.557 11:02:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:54.557 11:02:22 -- accel/accel.sh@16 -- # local accel_opc 00:14:54.557 11:02:22 -- accel/accel.sh@17 -- # local accel_module 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # IFS=: 00:14:54.557 11:02:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:54.557 11:02:22 -- accel/accel.sh@19 -- # read -r var val 00:14:54.557 11:02:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:54.557 11:02:22 -- accel/accel.sh@12 -- # build_accel_config 00:14:54.557 11:02:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:54.557 11:02:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:54.557 11:02:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:54.557 11:02:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:54.557 11:02:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:54.557 11:02:22 -- accel/accel.sh@40 -- # local IFS=, 00:14:54.557 11:02:22 -- accel/accel.sh@41 -- # jq -r . 00:14:54.557 [2024-04-18 11:02:23.006356] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:54.557 [2024-04-18 11:02:23.006439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77044 ] 00:14:54.557 [2024-04-18 11:02:23.145364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.814 [2024-04-18 11:02:23.241636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val=0x1 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val=compress 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@23 
-- # accel_opc=compress 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val=software 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@22 -- # accel_module=software 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.814 11:02:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:54.814 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.814 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.815 11:02:23 -- accel/accel.sh@20 -- # val=32 00:14:54.815 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.815 11:02:23 -- accel/accel.sh@20 -- # val=32 00:14:54.815 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.815 11:02:23 -- accel/accel.sh@20 -- # val=1 00:14:54.815 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.815 11:02:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:54.815 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.815 11:02:23 -- accel/accel.sh@20 -- # val=No 00:14:54.815 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.815 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.815 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:54.815 11:02:23 -- accel/accel.sh@20 -- # val= 00:14:54.815 11:02:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # IFS=: 00:14:54.815 11:02:23 -- accel/accel.sh@19 -- # read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.189 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.189 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.189 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # 
read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.189 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.189 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.189 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.189 ************************************ 00:14:56.189 END TEST accel_comp 00:14:56.189 ************************************ 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:56.189 11:02:24 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:14:56.189 11:02:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:56.189 00:14:56.189 real 0m1.490s 00:14:56.189 user 0m1.277s 00:14:56.189 sys 0m0.119s 00:14:56.189 11:02:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:56.189 11:02:24 -- common/autotest_common.sh@10 -- # set +x 00:14:56.189 11:02:24 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:56.189 11:02:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:56.189 11:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.189 11:02:24 -- common/autotest_common.sh@10 -- # set +x 00:14:56.189 ************************************ 00:14:56.189 START TEST accel_decomp 00:14:56.189 ************************************ 00:14:56.189 11:02:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:56.189 11:02:24 -- accel/accel.sh@16 -- # local accel_opc 00:14:56.189 11:02:24 -- accel/accel.sh@17 -- # local accel_module 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.189 11:02:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:56.189 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.189 11:02:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:56.189 11:02:24 -- accel/accel.sh@12 -- # build_accel_config 00:14:56.189 11:02:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:56.189 11:02:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:56.189 11:02:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:56.189 11:02:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:56.190 11:02:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:56.190 11:02:24 -- accel/accel.sh@40 -- # local IFS=, 00:14:56.190 11:02:24 -- accel/accel.sh@41 -- # jq -r . 00:14:56.190 [2024-04-18 11:02:24.608598] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:56.190 [2024-04-18 11:02:24.608736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77088 ] 00:14:56.190 [2024-04-18 11:02:24.744765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.449 [2024-04-18 11:02:24.851085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val=0x1 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val=decompress 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val=software 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@22 -- # accel_module=software 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val=32 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- 
accel/accel.sh@20 -- # val=32 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val=1 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val=Yes 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:56.449 11:02:24 -- accel/accel.sh@20 -- # val= 00:14:56.449 11:02:24 -- accel/accel.sh@21 -- # case "$var" in 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # IFS=: 00:14:56.449 11:02:24 -- accel/accel.sh@19 -- # read -r var val 00:14:57.826 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:57.826 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:57.826 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:57.826 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:57.826 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:57.826 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:57.826 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:57.826 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:57.826 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:57.826 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:57.826 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:57.827 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:57.827 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:57.827 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:57.827 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:57.827 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:57.827 11:02:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:57.827 11:02:26 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:57.827 11:02:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:57.827 00:14:57.827 real 0m1.498s 00:14:57.827 user 0m1.273s 00:14:57.827 sys 0m0.130s 00:14:57.827 11:02:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:57.827 ************************************ 00:14:57.827 END TEST accel_decomp 00:14:57.827 ************************************ 00:14:57.827 11:02:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.827 11:02:26 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:14:57.827 11:02:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:57.827 11:02:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:57.827 11:02:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.827 ************************************ 00:14:57.827 START TEST accel_decmop_full 00:14:57.827 ************************************ 00:14:57.827 11:02:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:57.827 11:02:26 -- accel/accel.sh@16 -- # local accel_opc 00:14:57.827 11:02:26 -- accel/accel.sh@17 -- # local accel_module 00:14:57.827 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:57.827 11:02:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:57.827 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:57.827 11:02:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:57.827 11:02:26 -- accel/accel.sh@12 -- # build_accel_config 00:14:57.827 11:02:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:57.827 11:02:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:57.827 11:02:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:57.827 11:02:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:57.827 11:02:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:57.827 11:02:26 -- accel/accel.sh@40 -- # local IFS=, 00:14:57.827 11:02:26 -- accel/accel.sh@41 -- # jq -r . 00:14:57.827 [2024-04-18 11:02:26.210687] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:57.827 [2024-04-18 11:02:26.210767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77128 ] 00:14:57.827 [2024-04-18 11:02:26.342655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.827 [2024-04-18 11:02:26.423640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.085 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.085 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.085 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.085 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.085 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.085 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.085 11:02:26 -- accel/accel.sh@20 -- # val=0x1 00:14:58.085 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.085 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.085 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 
11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val=decompress 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val=software 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@22 -- # accel_module=software 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val=32 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val=32 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val=1 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val=Yes 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:58.086 11:02:26 -- accel/accel.sh@20 -- # val= 00:14:58.086 11:02:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # IFS=: 00:14:58.086 11:02:26 -- accel/accel.sh@19 -- # read -r var val 00:14:59.022 11:02:27 -- accel/accel.sh@20 -- # val= 00:14:59.022 11:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # IFS=: 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # read -r var val 00:14:59.022 11:02:27 -- accel/accel.sh@20 -- # val= 00:14:59.022 11:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # IFS=: 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # read -r 
var val 00:14:59.022 11:02:27 -- accel/accel.sh@20 -- # val= 00:14:59.022 11:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # IFS=: 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # read -r var val 00:14:59.022 11:02:27 -- accel/accel.sh@20 -- # val= 00:14:59.022 11:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # IFS=: 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # read -r var val 00:14:59.022 11:02:27 -- accel/accel.sh@20 -- # val= 00:14:59.022 11:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # IFS=: 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # read -r var val 00:14:59.022 11:02:27 -- accel/accel.sh@20 -- # val= 00:14:59.022 11:02:27 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # IFS=: 00:14:59.022 11:02:27 -- accel/accel.sh@19 -- # read -r var val 00:14:59.022 11:02:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:59.022 11:02:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:59.022 11:02:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:59.022 00:14:59.022 real 0m1.467s 00:14:59.022 user 0m1.260s 00:14:59.022 sys 0m0.114s 00:14:59.022 11:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.022 ************************************ 00:14:59.022 END TEST accel_decmop_full 00:14:59.022 ************************************ 00:14:59.022 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:14:59.327 11:02:27 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:59.327 11:02:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:59.327 11:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.327 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:14:59.327 ************************************ 00:14:59.327 START TEST accel_decomp_mcore 00:14:59.327 ************************************ 00:14:59.327 11:02:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:59.327 11:02:27 -- accel/accel.sh@16 -- # local accel_opc 00:14:59.327 11:02:27 -- accel/accel.sh@17 -- # local accel_module 00:14:59.327 11:02:27 -- accel/accel.sh@19 -- # IFS=: 00:14:59.327 11:02:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:59.327 11:02:27 -- accel/accel.sh@19 -- # read -r var val 00:14:59.327 11:02:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:59.327 11:02:27 -- accel/accel.sh@12 -- # build_accel_config 00:14:59.327 11:02:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:59.327 11:02:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:59.327 11:02:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:59.327 11:02:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:59.327 11:02:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:59.327 11:02:27 -- accel/accel.sh@40 -- # local IFS=, 00:14:59.327 11:02:27 -- accel/accel.sh@41 -- # jq -r . 00:14:59.327 [2024-04-18 11:02:27.807995] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:14:59.327 [2024-04-18 11:02:27.808104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77166 ] 00:14:59.327 [2024-04-18 11:02:27.946658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.586 [2024-04-18 11:02:28.049345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.586 [2024-04-18 11:02:28.049410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.586 [2024-04-18 11:02:28.049540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.586 [2024-04-18 11:02:28.049543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=0xf 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=decompress 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=software 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@22 -- # accel_module=software 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 
00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=32 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=32 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=1 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val=Yes 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:14:59.586 11:02:28 -- accel/accel.sh@20 -- # val= 00:14:59.586 11:02:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # IFS=: 00:14:59.586 11:02:28 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- 
accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:00.962 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:00.962 11:02:29 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:00.962 11:02:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:00.962 00:15:00.962 real 0m1.511s 00:15:00.962 user 0m4.710s 00:15:00.962 sys 0m0.136s 00:15:00.962 11:02:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:00.962 11:02:29 -- common/autotest_common.sh@10 -- # set +x 00:15:00.962 ************************************ 00:15:00.962 END TEST accel_decomp_mcore 00:15:00.962 ************************************ 00:15:00.962 11:02:29 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:00.962 11:02:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:00.962 11:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:00.962 11:02:29 -- common/autotest_common.sh@10 -- # set +x 00:15:00.962 ************************************ 00:15:00.962 START TEST accel_decomp_full_mcore 00:15:00.962 ************************************ 00:15:00.962 11:02:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:00.962 11:02:29 -- accel/accel.sh@16 -- # local accel_opc 00:15:00.962 11:02:29 -- accel/accel.sh@17 -- # local accel_module 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:00.962 11:02:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:00.962 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:00.962 11:02:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:00.962 11:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:15:00.962 11:02:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:00.962 11:02:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:00.962 11:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:00.962 11:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:00.962 11:02:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:00.962 11:02:29 -- accel/accel.sh@40 -- # local IFS=, 00:15:00.962 11:02:29 -- accel/accel.sh@41 -- # jq -r . 00:15:00.962 [2024-04-18 11:02:29.420240] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
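The accel.sh@12 entry traced just above shows the exact accel_perf command the full_mcore test drives. The same invocation can be reproduced outside the harness; the sketch below copies the flags verbatim from that trace, drops only the -c /dev/fd/62 JSON config the harness generates on the fly, and treats the /home/vagrant path as this CI host's checkout rather than a fixed location. Flag meanings are best confirmed with accel_perf --help rather than taken from the comments.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk          # this CI host's checkout; adjust locally
  # Flags copied from the accel.sh@12 invocation traced above; the harness
  # additionally passes -c /dev/fd/62 (a generated JSON config), omitted here.
  "$SPDK_DIR/build/examples/accel_perf" \
      -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" \
      -y -o 0 -m 0xf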
00:15:00.962 [2024-04-18 11:02:29.420311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77208 ] 00:15:00.962 [2024-04-18 11:02:29.556816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.221 [2024-04-18 11:02:29.659120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.221 [2024-04-18 11:02:29.659196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.221 [2024-04-18 11:02:29.659289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.221 [2024-04-18 11:02:29.659289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=0xf 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=decompress 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val='111250 bytes' 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=software 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@22 -- # accel_module=software 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 
00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=32 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=32 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=1 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val=Yes 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:01.221 11:02:29 -- accel/accel.sh@20 -- # val= 00:15:01.221 11:02:29 -- accel/accel.sh@21 -- # case "$var" in 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # IFS=: 00:15:01.221 11:02:29 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- 
accel/accel.sh@19 -- # read -r var val 00:15:02.597 ************************************ 00:15:02.597 END TEST accel_decomp_full_mcore 00:15:02.597 ************************************ 00:15:02.597 11:02:30 -- accel/accel.sh@20 -- # val= 00:15:02.597 11:02:30 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # IFS=: 00:15:02.597 11:02:30 -- accel/accel.sh@19 -- # read -r var val 00:15:02.597 11:02:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:02.597 11:02:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:02.597 11:02:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:02.597 00:15:02.597 real 0m1.506s 00:15:02.598 user 0m4.747s 00:15:02.598 sys 0m0.129s 00:15:02.598 11:02:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:02.598 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:15:02.598 11:02:30 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:15:02.598 11:02:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:15:02.598 11:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.598 11:02:30 -- common/autotest_common.sh@10 -- # set +x 00:15:02.598 ************************************ 00:15:02.598 START TEST accel_decomp_mthread 00:15:02.598 ************************************ 00:15:02.598 11:02:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:15:02.598 11:02:31 -- accel/accel.sh@16 -- # local accel_opc 00:15:02.598 11:02:31 -- accel/accel.sh@17 -- # local accel_module 00:15:02.598 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.598 11:02:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:15:02.598 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.598 11:02:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:15:02.598 11:02:31 -- accel/accel.sh@12 -- # build_accel_config 00:15:02.598 11:02:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:02.598 11:02:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:02.598 11:02:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:02.598 11:02:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:02.598 11:02:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:02.598 11:02:31 -- accel/accel.sh@40 -- # local IFS=, 00:15:02.598 11:02:31 -- accel/accel.sh@41 -- # jq -r . 00:15:02.598 [2024-04-18 11:02:31.043699] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
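Every accel test in this stretch finishes with the same three checks traced at accel/accel.sh@27. Written out as plain bash, with accel_module and accel_opc being the variables the trace shows assigned at accel.sh@22 and @23 while the harness parses accel_perf's output, the post-run assertion is simply:
  # Condensed restatement of the accel.sh@27 assertions traced above; the
  # literal values are the ones captured during the runs in this log.
  accel_module=software
  accel_opc=decompress
  [[ -n $accel_module ]]                 # a module name was captured at all
  [[ -n $accel_opc ]]                    # an opcode was captured at all
  [[ $accel_module == software ]]        # decompress ran on the software path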
00:15:02.598 [2024-04-18 11:02:31.044163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77249 ] 00:15:02.598 [2024-04-18 11:02:31.179415] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.856 [2024-04-18 11:02:31.279952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val=0x1 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val=decompress 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.856 11:02:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:02.856 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.856 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val=software 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@22 -- # accel_module=software 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val=32 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- 
accel/accel.sh@20 -- # val=32 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val=2 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val=Yes 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:02.857 11:02:31 -- accel/accel.sh@20 -- # val= 00:15:02.857 11:02:31 -- accel/accel.sh@21 -- # case "$var" in 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # IFS=: 00:15:02.857 11:02:31 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@20 -- # val= 00:15:04.233 11:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@20 -- # val= 00:15:04.233 11:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@20 -- # val= 00:15:04.233 11:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@20 -- # val= 00:15:04.233 11:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@20 -- # val= 00:15:04.233 11:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@20 -- # val= 00:15:04.233 11:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@20 -- # val= 00:15:04.233 11:02:32 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:04.233 11:02:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:04.233 11:02:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:04.233 00:15:04.233 real 0m1.577s 00:15:04.233 user 0m1.364s 00:15:04.233 sys 0m0.116s 00:15:04.233 11:02:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.233 ************************************ 00:15:04.233 END TEST accel_decomp_mthread 00:15:04.233 
************************************ 00:15:04.233 11:02:32 -- common/autotest_common.sh@10 -- # set +x 00:15:04.233 11:02:32 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:15:04.233 11:02:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:04.233 11:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.233 11:02:32 -- common/autotest_common.sh@10 -- # set +x 00:15:04.233 ************************************ 00:15:04.233 START TEST accel_deomp_full_mthread 00:15:04.233 ************************************ 00:15:04.233 11:02:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:15:04.233 11:02:32 -- accel/accel.sh@16 -- # local accel_opc 00:15:04.233 11:02:32 -- accel/accel.sh@17 -- # local accel_module 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # IFS=: 00:15:04.233 11:02:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:15:04.233 11:02:32 -- accel/accel.sh@19 -- # read -r var val 00:15:04.233 11:02:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:15:04.233 11:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:15:04.233 11:02:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:04.233 11:02:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:04.233 11:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:04.233 11:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:04.233 11:02:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:04.233 11:02:32 -- accel/accel.sh@40 -- # local IFS=, 00:15:04.233 11:02:32 -- accel/accel.sh@41 -- # jq -r . 00:15:04.233 [2024-04-18 11:02:32.744297] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:15:04.233 [2024-04-18 11:02:32.744398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77289 ] 00:15:04.492 [2024-04-18 11:02:32.882016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.492 [2024-04-18 11:02:32.996828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val=0x1 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val=decompress 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val='111250 bytes' 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val=software 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@22 -- # accel_module=software 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val=32 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- 
accel/accel.sh@20 -- # val=32 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val=2 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val=Yes 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:04.492 11:02:33 -- accel/accel.sh@20 -- # val= 00:15:04.492 11:02:33 -- accel/accel.sh@21 -- # case "$var" in 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # IFS=: 00:15:04.492 11:02:33 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@20 -- # val= 00:15:05.895 11:02:34 -- accel/accel.sh@21 -- # case "$var" in 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # IFS=: 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@20 -- # val= 00:15:05.895 11:02:34 -- accel/accel.sh@21 -- # case "$var" in 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # IFS=: 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@20 -- # val= 00:15:05.895 11:02:34 -- accel/accel.sh@21 -- # case "$var" in 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # IFS=: 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@20 -- # val= 00:15:05.895 11:02:34 -- accel/accel.sh@21 -- # case "$var" in 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # IFS=: 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@20 -- # val= 00:15:05.895 11:02:34 -- accel/accel.sh@21 -- # case "$var" in 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # IFS=: 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@20 -- # val= 00:15:05.895 11:02:34 -- accel/accel.sh@21 -- # case "$var" in 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # IFS=: 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@20 -- # val= 00:15:05.895 11:02:34 -- accel/accel.sh@21 -- # case "$var" in 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # IFS=: 00:15:05.895 11:02:34 -- accel/accel.sh@19 -- # read -r var val 00:15:05.895 11:02:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:05.895 11:02:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:05.895 ************************************ 00:15:05.895 END TEST accel_deomp_full_mthread 00:15:05.895 ************************************ 00:15:05.895 11:02:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:05.895 00:15:05.895 real 0m1.530s 00:15:05.895 user 0m1.300s 00:15:05.895 sys 0m0.134s 00:15:05.895 11:02:34 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:15:05.895 11:02:34 -- common/autotest_common.sh@10 -- # set +x 00:15:05.895 11:02:34 -- accel/accel.sh@124 -- # [[ n == y ]] 00:15:05.895 11:02:34 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:15:05.895 11:02:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:05.895 11:02:34 -- accel/accel.sh@137 -- # build_accel_config 00:15:05.895 11:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.895 11:02:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:05.895 11:02:34 -- common/autotest_common.sh@10 -- # set +x 00:15:05.895 11:02:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:05.895 11:02:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:05.895 11:02:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:05.896 11:02:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:05.896 11:02:34 -- accel/accel.sh@40 -- # local IFS=, 00:15:05.896 11:02:34 -- accel/accel.sh@41 -- # jq -r . 00:15:05.896 ************************************ 00:15:05.896 START TEST accel_dif_functional_tests 00:15:05.896 ************************************ 00:15:05.896 11:02:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:15:05.896 [2024-04-18 11:02:34.403651] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:05.896 [2024-04-18 11:02:34.403748] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77335 ] 00:15:06.154 [2024-04-18 11:02:34.541533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.154 [2024-04-18 11:02:34.649125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.154 [2024-04-18 11:02:34.649255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.154 [2024-04-18 11:02:34.649257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.154 00:15:06.154 00:15:06.154 CUnit - A unit testing framework for C - Version 2.1-3 00:15:06.155 http://cunit.sourceforge.net/ 00:15:06.155 00:15:06.155 00:15:06.155 Suite: accel_dif 00:15:06.155 Test: verify: DIF generated, GUARD check ...passed 00:15:06.155 Test: verify: DIF generated, APPTAG check ...passed 00:15:06.155 Test: verify: DIF generated, REFTAG check ...passed 00:15:06.155 Test: verify: DIF not generated, GUARD check ...[2024-04-18 11:02:34.738922] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:15:06.155 passed 00:15:06.155 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 11:02:34.739232] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:15:06.155 [2024-04-18 11:02:34.739352] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:15:06.155 passed 00:15:06.155 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 11:02:34.739467] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:15:06.155 [2024-04-18 11:02:34.739566] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:15:06.155 passed 00:15:06.155 Test: verify: APPTAG correct, APPTAG check ...[2024-04-18 11:02:34.739684] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:15:06.155 passed 00:15:06.155 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 11:02:34.739915] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:15:06.155 passed 00:15:06.155 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:15:06.155 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:15:06.155 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:15:06.155 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 11:02:34.740268] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:15:06.155 passed 00:15:06.155 Test: generate copy: DIF generated, GUARD check ...passed 00:15:06.155 Test: generate copy: DIF generated, APTTAG check ...passed 00:15:06.155 Test: generate copy: DIF generated, REFTAG check ...passed 00:15:06.155 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:15:06.155 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:15:06.155 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:15:06.155 Test: generate copy: iovecs-len validate ...[2024-04-18 11:02:34.740972] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:15:06.155 passed 00:15:06.155 Test: generate copy: buffer alignment validate ...passed 00:15:06.155 00:15:06.155 Run Summary: Type Total Ran Passed Failed Inactive 00:15:06.155 suites 1 1 n/a 0 0 00:15:06.155 tests 20 20 20 0 0 00:15:06.155 asserts 204 204 204 0 n/a 00:15:06.155 00:15:06.155 Elapsed time = 0.006 seconds 00:15:06.413 ************************************ 00:15:06.413 END TEST accel_dif_functional_tests 00:15:06.413 ************************************ 00:15:06.413 00:15:06.413 real 0m0.603s 00:15:06.413 user 0m0.749s 00:15:06.413 sys 0m0.155s 00:15:06.413 11:02:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.413 11:02:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.414 ************************************ 00:15:06.414 END TEST accel 00:15:06.414 ************************************ 00:15:06.414 00:15:06.414 real 0m35.869s 00:15:06.414 user 0m36.533s 00:15:06.414 sys 0m4.698s 00:15:06.414 11:02:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.414 11:02:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.414 11:02:35 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:15:06.414 11:02:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:06.414 11:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.414 11:02:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.672 ************************************ 00:15:06.672 START TEST accel_rpc 00:15:06.672 ************************************ 00:15:06.672 11:02:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:15:06.672 * Looking for test storage... 00:15:06.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:15:06.672 11:02:35 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:06.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
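The accel_rpc suite that starts here drives a short RPC sequence against a spdk_tgt launched with --wait-for-rpc. A condensed sketch of that sequence, using only commands that appear in the accel_rpc.sh trace below; paths belong to this CI host, and a plain sleep stands in for the harness's waitforlisten helper:
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC=$SPDK_DIR/scripts/rpc.py
  "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
  tgt_pid=$!
  sleep 2                                          # harness waits on /var/tmp/spdk.sock instead
  "$RPC" accel_assign_opc -o copy -m software      # assigned before framework init, as in the trace
  "$RPC" framework_start_init
  "$RPC" accel_get_opc_assignments | jq -r .copy | grep software
  kill "$tgt_pid"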
00:15:06.672 11:02:35 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77405 00:15:06.672 11:02:35 -- accel/accel_rpc.sh@15 -- # waitforlisten 77405 00:15:06.672 11:02:35 -- common/autotest_common.sh@817 -- # '[' -z 77405 ']' 00:15:06.672 11:02:35 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:06.672 11:02:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.672 11:02:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:06.672 11:02:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.672 11:02:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:06.672 11:02:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.672 [2024-04-18 11:02:35.244532] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:06.672 [2024-04-18 11:02:35.244645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77405 ] 00:15:06.930 [2024-04-18 11:02:35.380862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.930 [2024-04-18 11:02:35.465690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.865 11:02:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:07.865 11:02:36 -- common/autotest_common.sh@850 -- # return 0 00:15:07.865 11:02:36 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:15:07.866 11:02:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:07.866 11:02:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.866 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:15:07.866 ************************************ 00:15:07.866 START TEST accel_assign_opcode 00:15:07.866 ************************************ 00:15:07.866 11:02:36 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:15:07.866 11:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.866 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:15:07.866 [2024-04-18 11:02:36.254297] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:15:07.866 11:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:15:07.866 11:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.866 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:15:07.866 [2024-04-18 11:02:36.262257] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:15:07.866 11:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:15:07.866 11:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.866 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:15:07.866 11:02:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:15:07.866 11:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:15:07.866 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:15:07.866 11:02:36 -- accel/accel_rpc.sh@42 -- # grep software 00:15:07.866 11:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.125 software 00:15:08.125 ************************************ 00:15:08.125 END TEST accel_assign_opcode 00:15:08.125 ************************************ 00:15:08.125 00:15:08.125 real 0m0.296s 00:15:08.125 user 0m0.051s 00:15:08.125 sys 0m0.016s 00:15:08.125 11:02:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:08.125 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:15:08.125 11:02:36 -- accel/accel_rpc.sh@55 -- # killprocess 77405 00:15:08.125 11:02:36 -- common/autotest_common.sh@936 -- # '[' -z 77405 ']' 00:15:08.125 11:02:36 -- common/autotest_common.sh@940 -- # kill -0 77405 00:15:08.125 11:02:36 -- common/autotest_common.sh@941 -- # uname 00:15:08.125 11:02:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.125 11:02:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77405 00:15:08.125 killing process with pid 77405 00:15:08.125 11:02:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:08.125 11:02:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:08.125 11:02:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77405' 00:15:08.125 11:02:36 -- common/autotest_common.sh@955 -- # kill 77405 00:15:08.125 11:02:36 -- common/autotest_common.sh@960 -- # wait 77405 00:15:08.384 00:15:08.384 real 0m1.873s 00:15:08.384 user 0m1.946s 00:15:08.384 sys 0m0.481s 00:15:08.384 11:02:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:08.384 ************************************ 00:15:08.384 END TEST accel_rpc 00:15:08.384 ************************************ 00:15:08.384 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:15:08.384 11:02:37 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:08.384 11:02:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:08.384 11:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.384 11:02:37 -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 ************************************ 00:15:08.645 START TEST app_cmdline 00:15:08.645 ************************************ 00:15:08.645 11:02:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:08.645 * Looking for test storage... 00:15:08.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:08.646 11:02:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:15:08.646 11:02:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=77525 00:15:08.646 11:02:37 -- app/cmdline.sh@18 -- # waitforlisten 77525 00:15:08.646 11:02:37 -- common/autotest_common.sh@817 -- # '[' -z 77525 ']' 00:15:08.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
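The cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and, in the trace that follows, verifies both that an allowed method answers and that any other method is refused. Against an already running target the same two probes look like this; method names and the -32601 error are taken from the log, nothing else is assumed:
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # On the allow-list: returns the JSON version object shown below.
  "$RPC" spdk_get_version | jq -r .version         # "SPDK v24.05-pre git sha1 65b4e17c6" here
  # Not on the allow-list: expected to fail with "Method not found" (-32601), as below.
  if ! "$RPC" env_dpdk_get_mem_stats; then
      echo "disallowed RPC rejected as expected"
  fi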
00:15:08.646 11:02:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.646 11:02:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.646 11:02:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.646 11:02:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.646 11:02:37 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:15:08.646 11:02:37 -- common/autotest_common.sh@10 -- # set +x 00:15:08.646 [2024-04-18 11:02:37.254183] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:08.646 [2024-04-18 11:02:37.254858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77525 ] 00:15:08.904 [2024-04-18 11:02:37.392616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.904 [2024-04-18 11:02:37.495645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.839 11:02:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.839 11:02:38 -- common/autotest_common.sh@850 -- # return 0 00:15:09.839 11:02:38 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:15:09.839 { 00:15:09.839 "fields": { 00:15:09.839 "commit": "65b4e17c6", 00:15:09.839 "major": 24, 00:15:09.839 "minor": 5, 00:15:09.839 "patch": 0, 00:15:09.839 "suffix": "-pre" 00:15:09.839 }, 00:15:09.839 "version": "SPDK v24.05-pre git sha1 65b4e17c6" 00:15:09.839 } 00:15:10.098 11:02:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:15:10.098 11:02:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:15:10.098 11:02:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:15:10.098 11:02:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:15:10.098 11:02:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:15:10.098 11:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.098 11:02:38 -- common/autotest_common.sh@10 -- # set +x 00:15:10.098 11:02:38 -- app/cmdline.sh@26 -- # sort 00:15:10.098 11:02:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:15:10.098 11:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.098 11:02:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:15:10.098 11:02:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:15:10.098 11:02:38 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:10.098 11:02:38 -- common/autotest_common.sh@638 -- # local es=0 00:15:10.098 11:02:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:10.098 11:02:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.098 11:02:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.098 11:02:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.098 11:02:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.098 11:02:38 -- common/autotest_common.sh@632 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.098 11:02:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.098 11:02:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.098 11:02:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:10.099 11:02:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:10.357 2024/04/18 11:02:38 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:15:10.357 request: 00:15:10.357 { 00:15:10.357 "method": "env_dpdk_get_mem_stats", 00:15:10.357 "params": {} 00:15:10.357 } 00:15:10.357 Got JSON-RPC error response 00:15:10.357 GoRPCClient: error on JSON-RPC call 00:15:10.357 11:02:38 -- common/autotest_common.sh@641 -- # es=1 00:15:10.357 11:02:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.357 11:02:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.357 11:02:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.357 11:02:38 -- app/cmdline.sh@1 -- # killprocess 77525 00:15:10.357 11:02:38 -- common/autotest_common.sh@936 -- # '[' -z 77525 ']' 00:15:10.357 11:02:38 -- common/autotest_common.sh@940 -- # kill -0 77525 00:15:10.357 11:02:38 -- common/autotest_common.sh@941 -- # uname 00:15:10.357 11:02:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.357 11:02:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77525 00:15:10.357 11:02:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:10.357 11:02:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:10.357 killing process with pid 77525 00:15:10.357 11:02:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77525' 00:15:10.357 11:02:38 -- common/autotest_common.sh@955 -- # kill 77525 00:15:10.357 11:02:38 -- common/autotest_common.sh@960 -- # wait 77525 00:15:10.616 00:15:10.616 real 0m2.108s 00:15:10.616 user 0m2.598s 00:15:10.616 sys 0m0.507s 00:15:10.616 11:02:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:10.616 ************************************ 00:15:10.616 END TEST app_cmdline 00:15:10.616 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:10.616 ************************************ 00:15:10.616 11:02:39 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:10.616 11:02:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:10.616 11:02:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.616 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 ************************************ 00:15:10.874 START TEST version 00:15:10.874 ************************************ 00:15:10.874 11:02:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:10.874 * Looking for test storage... 
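The version test that follows never starts a target at all; it recovers the version fields from include/spdk/version.h with a grep, cut and tr pipeline and compares the result with the installed Python bindings. The same extraction, condensed from the app/version.sh trace below; get_ver is only local shorthand introduced here:
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  hdr=$SPDK_DIR/include/spdk/version.h
  get_ver() {   # the grep | cut | tr pipeline traced at app/version.sh@13-14
      grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }
  major=$(get_ver MAJOR); minor=$(get_ver MINOR)
  patch=$(get_ver PATCH); suffix=$(get_ver SUFFIX)
  echo "version.h says $major.$minor.$patch$suffix"      # 24.5.0-pre in this run
  # Cross-check against the python bindings, as version.sh@30 does (needs the
  # spdk package importable, e.g. via the PYTHONPATH set in the trace).
  python3 -c 'import spdk; print(spdk.__version__)'      # 24.5rc0 in this run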
00:15:10.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:10.874 11:02:39 -- app/version.sh@17 -- # get_header_version major 00:15:10.874 11:02:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:10.874 11:02:39 -- app/version.sh@14 -- # cut -f2 00:15:10.874 11:02:39 -- app/version.sh@14 -- # tr -d '"' 00:15:10.874 11:02:39 -- app/version.sh@17 -- # major=24 00:15:10.874 11:02:39 -- app/version.sh@18 -- # get_header_version minor 00:15:10.874 11:02:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:10.874 11:02:39 -- app/version.sh@14 -- # cut -f2 00:15:10.874 11:02:39 -- app/version.sh@14 -- # tr -d '"' 00:15:10.874 11:02:39 -- app/version.sh@18 -- # minor=5 00:15:10.874 11:02:39 -- app/version.sh@19 -- # get_header_version patch 00:15:10.874 11:02:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:10.874 11:02:39 -- app/version.sh@14 -- # cut -f2 00:15:10.874 11:02:39 -- app/version.sh@14 -- # tr -d '"' 00:15:10.874 11:02:39 -- app/version.sh@19 -- # patch=0 00:15:10.874 11:02:39 -- app/version.sh@20 -- # get_header_version suffix 00:15:10.874 11:02:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:10.874 11:02:39 -- app/version.sh@14 -- # cut -f2 00:15:10.874 11:02:39 -- app/version.sh@14 -- # tr -d '"' 00:15:10.874 11:02:39 -- app/version.sh@20 -- # suffix=-pre 00:15:10.874 11:02:39 -- app/version.sh@22 -- # version=24.5 00:15:10.874 11:02:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:15:10.874 11:02:39 -- app/version.sh@28 -- # version=24.5rc0 00:15:10.874 11:02:39 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:10.874 11:02:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:15:10.874 11:02:39 -- app/version.sh@30 -- # py_version=24.5rc0 00:15:10.874 11:02:39 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:15:10.874 00:15:10.874 real 0m0.159s 00:15:10.874 user 0m0.091s 00:15:10.874 sys 0m0.097s 00:15:10.874 11:02:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:10.874 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:10.874 ************************************ 00:15:10.874 END TEST version 00:15:10.874 ************************************ 00:15:11.133 11:02:39 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:15:11.133 11:02:39 -- spdk/autotest.sh@194 -- # uname -s 00:15:11.133 11:02:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:15:11.133 11:02:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:15:11.133 11:02:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:15:11.133 11:02:39 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:15:11.133 11:02:39 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:15:11.133 11:02:39 -- spdk/autotest.sh@258 -- # timing_exit lib 00:15:11.133 11:02:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:11.133 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:11.133 11:02:39 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:15:11.133 11:02:39 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:15:11.133 11:02:39 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:15:11.133 11:02:39 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:15:11.133 11:02:39 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:15:11.133 11:02:39 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:15:11.134 11:02:39 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:11.134 11:02:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:11.134 11:02:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.134 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:11.134 ************************************ 00:15:11.134 START TEST nvmf_tcp 00:15:11.134 ************************************ 00:15:11.134 11:02:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:11.134 * Looking for test storage... 00:15:11.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@10 -- # uname -s 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.134 11:02:39 -- nvmf/common.sh@7 -- # uname -s 00:15:11.134 11:02:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.134 11:02:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.134 11:02:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.134 11:02:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.134 11:02:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.134 11:02:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.134 11:02:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.134 11:02:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.134 11:02:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.134 11:02:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.134 11:02:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:11.134 11:02:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:11.134 11:02:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.134 11:02:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.134 11:02:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.134 11:02:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.134 11:02:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.134 11:02:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.134 11:02:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.134 11:02:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.134 11:02:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.134 11:02:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.134 11:02:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.134 11:02:39 -- paths/export.sh@5 -- # export PATH 00:15:11.134 11:02:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.134 11:02:39 -- nvmf/common.sh@47 -- # : 0 00:15:11.134 11:02:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.134 11:02:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.134 11:02:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.134 11:02:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.134 11:02:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.134 11:02:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.134 11:02:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.134 11:02:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:15:11.134 11:02:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:11.134 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:15:11.134 11:02:39 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:11.134 11:02:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:11.134 11:02:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.134 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:11.421 ************************************ 00:15:11.421 START TEST nvmf_example 00:15:11.421 ************************************ 00:15:11.421 11:02:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:11.421 * Looking for test storage... 
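Both nvmf.sh above and the nvmf_example script below source test/nvmf/common.sh, whose trace shows a fresh host NQN and host ID being generated for the run. Reproduced in isolation, and hedged where the trace shows only the resulting values rather than the actual derivation:
  # Host identity as set up by test/nvmf/common.sh in the trace above; requires
  # nvme-cli for gen-hostnqn, and the uuid naturally differs on every run. The
  # parameter expansion used for the host ID is a guess at the helper's intent,
  # not a copy of its code.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare uuid, matching the NVME_HOSTID value above
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # presumably consumed later through the 'nvme connect' wrapper defined alongside it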
00:15:11.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:11.421 11:02:39 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.421 11:02:39 -- nvmf/common.sh@7 -- # uname -s 00:15:11.421 11:02:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.421 11:02:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.421 11:02:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.421 11:02:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.421 11:02:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.421 11:02:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.421 11:02:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.421 11:02:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.421 11:02:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.421 11:02:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.421 11:02:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:11.421 11:02:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:11.421 11:02:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.421 11:02:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.421 11:02:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.421 11:02:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.421 11:02:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.421 11:02:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.421 11:02:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.421 11:02:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.421 11:02:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.421 11:02:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.421 11:02:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.421 11:02:39 -- paths/export.sh@5 -- # export PATH 00:15:11.421 11:02:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.421 11:02:39 -- nvmf/common.sh@47 -- # : 0 00:15:11.421 11:02:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.421 11:02:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.421 11:02:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.421 11:02:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.421 11:02:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.421 11:02:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.421 11:02:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.421 11:02:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.421 11:02:39 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:11.421 11:02:39 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:11.421 11:02:39 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:11.421 11:02:39 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:11.421 11:02:39 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:11.421 11:02:39 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:11.421 11:02:39 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:11.421 11:02:39 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:11.421 11:02:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:11.421 11:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:11.421 11:02:39 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:11.421 11:02:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:11.421 11:02:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.421 11:02:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:11.421 11:02:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:11.421 11:02:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:11.421 11:02:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.421 11:02:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.421 11:02:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.421 11:02:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:11.421 11:02:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:11.421 11:02:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:11.421 11:02:39 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:15:11.421 11:02:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:11.421 11:02:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:11.421 11:02:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.421 11:02:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.421 11:02:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.421 11:02:39 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:11.421 11:02:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.421 11:02:39 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.421 11:02:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.421 11:02:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.421 11:02:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.421 11:02:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.421 11:02:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.421 11:02:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.421 11:02:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:11.421 Cannot find device "nvmf_init_br" 00:15:11.421 11:02:39 -- nvmf/common.sh@154 -- # true 00:15:11.421 11:02:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:11.421 Cannot find device "nvmf_tgt_br" 00:15:11.421 11:02:39 -- nvmf/common.sh@155 -- # true 00:15:11.421 11:02:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.421 Cannot find device "nvmf_tgt_br2" 00:15:11.421 11:02:39 -- nvmf/common.sh@156 -- # true 00:15:11.421 11:02:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:11.421 Cannot find device "nvmf_init_br" 00:15:11.421 11:02:39 -- nvmf/common.sh@157 -- # true 00:15:11.421 11:02:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:11.421 Cannot find device "nvmf_tgt_br" 00:15:11.421 11:02:39 -- nvmf/common.sh@158 -- # true 00:15:11.421 11:02:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:11.421 Cannot find device "nvmf_tgt_br2" 00:15:11.421 11:02:40 -- nvmf/common.sh@159 -- # true 00:15:11.421 11:02:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:11.421 Cannot find device "nvmf_br" 00:15:11.421 11:02:40 -- nvmf/common.sh@160 -- # true 00:15:11.421 11:02:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:11.421 Cannot find device "nvmf_init_if" 00:15:11.421 11:02:40 -- nvmf/common.sh@161 -- # true 00:15:11.421 11:02:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.421 11:02:40 -- nvmf/common.sh@162 -- # true 00:15:11.421 11:02:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.421 11:02:40 -- nvmf/common.sh@163 -- # true 00:15:11.421 11:02:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.421 11:02:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.680 11:02:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.680 11:02:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.680 11:02:40 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.680 11:02:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.680 11:02:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.680 11:02:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.680 11:02:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.680 11:02:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:11.680 11:02:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:11.680 11:02:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:11.680 11:02:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:11.680 11:02:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.680 11:02:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.680 11:02:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.680 11:02:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:11.680 11:02:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:11.680 11:02:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.680 11:02:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.680 11:02:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.680 11:02:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.680 11:02:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.680 11:02:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:11.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:15:11.680 00:15:11.680 --- 10.0.0.2 ping statistics --- 00:15:11.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.680 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:11.680 11:02:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:11.680 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:11.680 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:11.680 00:15:11.680 --- 10.0.0.3 ping statistics --- 00:15:11.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.680 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:11.680 11:02:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:11.680 00:15:11.680 --- 10.0.0.1 ping statistics --- 00:15:11.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.680 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:11.939 11:02:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.939 11:02:40 -- nvmf/common.sh@422 -- # return 0 00:15:11.939 11:02:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:11.939 11:02:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.939 11:02:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:11.939 11:02:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:11.939 11:02:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.939 11:02:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:11.939 11:02:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:11.939 11:02:40 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:11.939 11:02:40 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:11.939 11:02:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:11.939 11:02:40 -- common/autotest_common.sh@10 -- # set +x 00:15:11.939 11:02:40 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:11.939 11:02:40 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:11.939 11:02:40 -- target/nvmf_example.sh@34 -- # nvmfpid=77896 00:15:11.939 11:02:40 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.939 11:02:40 -- target/nvmf_example.sh@36 -- # waitforlisten 77896 00:15:11.939 11:02:40 -- common/autotest_common.sh@817 -- # '[' -z 77896 ']' 00:15:11.939 11:02:40 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:11.939 11:02:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.939 11:02:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.939 11:02:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
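[editor's note] For readers following the trace: the nvmf_veth_init block above is what builds the loopback test network used by the TCP transport tests. Condensed into plain commands (interface names and addresses exactly as they appear in this run; the surrounding xtrace, timestamps, and error handling are omitted, so this is a sketch rather than the script verbatim), the setup is roughly:

    # create a network namespace for the SPDK target plus three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator keeps 10.0.0.1; the namespaced target ends get 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP (port 4420) in, forward across the bridge, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three pings at the end are the same reachability checks whose output appears just above; once they pass, nvmftestinit returns and the example target is started inside the namespace.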
00:15:11.939 11:02:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.939 11:02:40 -- common/autotest_common.sh@10 -- # set +x 00:15:12.874 11:02:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.874 11:02:41 -- common/autotest_common.sh@850 -- # return 0 00:15:12.874 11:02:41 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:12.874 11:02:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:12.874 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:15:12.874 11:02:41 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.874 11:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.874 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:15:12.874 11:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.874 11:02:41 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:12.874 11:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.874 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:15:12.874 11:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.874 11:02:41 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:12.874 11:02:41 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.874 11:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.874 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:15:12.874 11:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.874 11:02:41 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:12.874 11:02:41 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.874 11:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.874 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:15:12.874 11:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.874 11:02:41 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.874 11:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.874 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:15:12.874 11:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.874 11:02:41 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:12.874 11:02:41 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:25.074 Initializing NVMe Controllers 00:15:25.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:25.074 Initialization complete. Launching workers. 
00:15:25.074 ======================================================== 00:15:25.074 Latency(us) 00:15:25.074 Device Information : IOPS MiB/s Average min max 00:15:25.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14726.30 57.52 4348.49 770.58 20184.14 00:15:25.074 ======================================================== 00:15:25.074 Total : 14726.30 57.52 4348.49 770.58 20184.14 00:15:25.074 00:15:25.074 11:02:51 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:25.074 11:02:51 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:25.074 11:02:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:25.074 11:02:51 -- nvmf/common.sh@117 -- # sync 00:15:25.074 11:02:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.074 11:02:51 -- nvmf/common.sh@120 -- # set +e 00:15:25.074 11:02:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.074 11:02:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.074 rmmod nvme_tcp 00:15:25.074 rmmod nvme_fabrics 00:15:25.074 rmmod nvme_keyring 00:15:25.074 11:02:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.074 11:02:51 -- nvmf/common.sh@124 -- # set -e 00:15:25.074 11:02:51 -- nvmf/common.sh@125 -- # return 0 00:15:25.074 11:02:51 -- nvmf/common.sh@478 -- # '[' -n 77896 ']' 00:15:25.074 11:02:51 -- nvmf/common.sh@479 -- # killprocess 77896 00:15:25.074 11:02:51 -- common/autotest_common.sh@936 -- # '[' -z 77896 ']' 00:15:25.074 11:02:51 -- common/autotest_common.sh@940 -- # kill -0 77896 00:15:25.074 11:02:51 -- common/autotest_common.sh@941 -- # uname 00:15:25.074 11:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:25.074 11:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77896 00:15:25.074 11:02:51 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:15:25.074 11:02:51 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:15:25.074 killing process with pid 77896 00:15:25.074 11:02:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77896' 00:15:25.074 11:02:51 -- common/autotest_common.sh@955 -- # kill 77896 00:15:25.074 11:02:51 -- common/autotest_common.sh@960 -- # wait 77896 00:15:25.074 nvmf threads initialize successfully 00:15:25.074 bdev subsystem init successfully 00:15:25.074 created a nvmf target service 00:15:25.074 create targets's poll groups done 00:15:25.074 all subsystems of target started 00:15:25.074 nvmf target is running 00:15:25.074 all subsystems of target stopped 00:15:25.074 destroy targets's poll groups done 00:15:25.074 destroyed the nvmf target service 00:15:25.074 bdev subsystem finish successfully 00:15:25.074 nvmf threads destroy successfully 00:15:25.074 11:02:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:25.074 11:02:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:25.074 11:02:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:25.074 11:02:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.074 11:02:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.074 11:02:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.074 11:02:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.074 11:02:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.074 11:02:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:25.074 11:02:52 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:25.074 11:02:52 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:15:25.074 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:15:25.074 00:15:25.074 real 0m12.342s 00:15:25.074 user 0m44.149s 00:15:25.074 sys 0m2.034s 00:15:25.074 11:02:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:25.074 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:15:25.074 ************************************ 00:15:25.074 END TEST nvmf_example 00:15:25.074 ************************************ 00:15:25.074 11:02:52 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:25.074 11:02:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.074 11:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.074 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:15:25.074 ************************************ 00:15:25.074 START TEST nvmf_filesystem 00:15:25.074 ************************************ 00:15:25.074 11:02:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:25.074 * Looking for test storage... 00:15:25.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.074 11:02:52 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:25.074 11:02:52 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:25.074 11:02:52 -- common/autotest_common.sh@34 -- # set -e 00:15:25.074 11:02:52 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:25.074 11:02:52 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:25.074 11:02:52 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:25.074 11:02:52 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:25.074 11:02:52 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:25.074 11:02:52 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:25.074 11:02:52 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:25.074 11:02:52 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:25.074 11:02:52 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:25.074 11:02:52 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:15:25.074 11:02:52 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:25.074 11:02:52 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:25.074 11:02:52 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:25.074 11:02:52 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:25.074 11:02:52 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:25.074 11:02:52 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:25.074 11:02:52 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:25.074 11:02:52 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:25.074 11:02:52 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:25.074 11:02:52 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:25.074 11:02:52 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:25.074 11:02:52 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:15:25.074 11:02:52 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:25.074 11:02:52 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:25.074 11:02:52 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:15:25.074 11:02:52 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:15:25.074 11:02:52 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:15:25.074 11:02:52 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:25.074 11:02:52 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:15:25.075 11:02:52 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:15:25.075 11:02:52 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:25.075 11:02:52 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:25.075 11:02:52 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:15:25.075 11:02:52 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:15:25.075 11:02:52 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:15:25.075 11:02:52 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:15:25.075 11:02:52 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:15:25.075 11:02:52 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:15:25.075 11:02:52 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:15:25.075 11:02:52 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:15:25.075 11:02:52 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:15:25.075 11:02:52 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:15:25.075 11:02:52 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:15:25.075 11:02:52 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:15:25.075 11:02:52 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:15:25.075 11:02:52 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:15:25.075 11:02:52 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:15:25.075 11:02:52 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:15:25.075 11:02:52 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:25.075 11:02:52 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:15:25.075 11:02:52 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:15:25.075 11:02:52 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:15:25.075 11:02:52 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:25.075 11:02:52 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:15:25.075 11:02:52 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:15:25.075 11:02:52 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:15:25.075 11:02:52 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:15:25.075 11:02:52 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:15:25.075 11:02:52 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:15:25.075 11:02:52 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:15:25.075 11:02:52 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:15:25.075 11:02:52 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:15:25.075 11:02:52 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:15:25.075 11:02:52 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:15:25.075 11:02:52 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:15:25.075 11:02:52 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:15:25.075 11:02:52 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:25.075 11:02:52 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:15:25.075 11:02:52 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:15:25.075 11:02:52 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:15:25.075 11:02:52 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:15:25.075 
11:02:52 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:15:25.075 11:02:52 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:25.075 11:02:52 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:15:25.075 11:02:52 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:15:25.075 11:02:52 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:15:25.075 11:02:52 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:15:25.075 11:02:52 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:15:25.075 11:02:52 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:15:25.075 11:02:52 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:15:25.075 11:02:52 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:15:25.075 11:02:52 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:15:25.075 11:02:52 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:15:25.075 11:02:52 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:15:25.075 11:02:52 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:25.075 11:02:52 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:15:25.075 11:02:52 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:15:25.075 11:02:52 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:25.075 11:02:52 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:25.075 11:02:52 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:25.075 11:02:52 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:25.075 11:02:52 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:25.075 11:02:52 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:25.075 11:02:52 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:25.075 11:02:52 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:25.075 11:02:52 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:25.075 11:02:52 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:25.075 11:02:52 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:25.075 11:02:52 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:25.075 11:02:52 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:25.075 11:02:52 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:25.075 11:02:52 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:25.075 11:02:52 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:25.075 #define SPDK_CONFIG_H 00:15:25.075 #define SPDK_CONFIG_APPS 1 00:15:25.075 #define SPDK_CONFIG_ARCH native 00:15:25.075 #undef SPDK_CONFIG_ASAN 00:15:25.075 #define SPDK_CONFIG_AVAHI 1 00:15:25.075 #undef SPDK_CONFIG_CET 00:15:25.075 #define SPDK_CONFIG_COVERAGE 1 00:15:25.075 #define SPDK_CONFIG_CROSS_PREFIX 00:15:25.075 #undef SPDK_CONFIG_CRYPTO 00:15:25.075 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:25.075 #undef SPDK_CONFIG_CUSTOMOCF 00:15:25.075 #undef SPDK_CONFIG_DAOS 00:15:25.075 #define SPDK_CONFIG_DAOS_DIR 00:15:25.075 #define SPDK_CONFIG_DEBUG 1 00:15:25.075 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:25.075 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:15:25.075 #define SPDK_CONFIG_DPDK_INC_DIR 
//home/vagrant/spdk_repo/dpdk/build/include 00:15:25.075 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:15:25.075 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:25.075 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:25.075 #define SPDK_CONFIG_EXAMPLES 1 00:15:25.075 #undef SPDK_CONFIG_FC 00:15:25.075 #define SPDK_CONFIG_FC_PATH 00:15:25.075 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:25.075 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:25.075 #undef SPDK_CONFIG_FUSE 00:15:25.075 #undef SPDK_CONFIG_FUZZER 00:15:25.075 #define SPDK_CONFIG_FUZZER_LIB 00:15:25.075 #define SPDK_CONFIG_GOLANG 1 00:15:25.075 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:25.075 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:25.075 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:25.075 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:15:25.075 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:25.075 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:25.075 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:25.075 #define SPDK_CONFIG_IDXD 1 00:15:25.075 #undef SPDK_CONFIG_IDXD_KERNEL 00:15:25.075 #undef SPDK_CONFIG_IPSEC_MB 00:15:25.075 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:25.075 #define SPDK_CONFIG_ISAL 1 00:15:25.075 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:25.075 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:25.075 #define SPDK_CONFIG_LIBDIR 00:15:25.075 #undef SPDK_CONFIG_LTO 00:15:25.075 #define SPDK_CONFIG_MAX_LCORES 00:15:25.075 #define SPDK_CONFIG_NVME_CUSE 1 00:15:25.075 #undef SPDK_CONFIG_OCF 00:15:25.075 #define SPDK_CONFIG_OCF_PATH 00:15:25.075 #define SPDK_CONFIG_OPENSSL_PATH 00:15:25.075 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:25.075 #define SPDK_CONFIG_PGO_DIR 00:15:25.075 #undef SPDK_CONFIG_PGO_USE 00:15:25.075 #define SPDK_CONFIG_PREFIX /usr/local 00:15:25.075 #undef SPDK_CONFIG_RAID5F 00:15:25.075 #undef SPDK_CONFIG_RBD 00:15:25.075 #define SPDK_CONFIG_RDMA 1 00:15:25.075 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:25.075 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:25.075 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:25.075 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:25.075 #define SPDK_CONFIG_SHARED 1 00:15:25.075 #undef SPDK_CONFIG_SMA 00:15:25.075 #define SPDK_CONFIG_TESTS 1 00:15:25.075 #undef SPDK_CONFIG_TSAN 00:15:25.075 #define SPDK_CONFIG_UBLK 1 00:15:25.075 #define SPDK_CONFIG_UBSAN 1 00:15:25.075 #undef SPDK_CONFIG_UNIT_TESTS 00:15:25.075 #undef SPDK_CONFIG_URING 00:15:25.075 #define SPDK_CONFIG_URING_PATH 00:15:25.075 #undef SPDK_CONFIG_URING_ZNS 00:15:25.075 #define SPDK_CONFIG_USDT 1 00:15:25.075 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:25.075 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:25.075 #undef SPDK_CONFIG_VFIO_USER 00:15:25.075 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:25.075 #define SPDK_CONFIG_VHOST 1 00:15:25.075 #define SPDK_CONFIG_VIRTIO 1 00:15:25.075 #undef SPDK_CONFIG_VTUNE 00:15:25.075 #define SPDK_CONFIG_VTUNE_DIR 00:15:25.075 #define SPDK_CONFIG_WERROR 1 00:15:25.075 #define SPDK_CONFIG_WPDK_DIR 00:15:25.075 #undef SPDK_CONFIG_XNVME 00:15:25.075 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:25.075 11:02:52 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:25.075 11:02:52 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.075 11:02:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.075 11:02:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.075 11:02:52 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.075 11:02:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.076 11:02:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.076 11:02:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.076 11:02:52 -- paths/export.sh@5 -- # export PATH 00:15:25.076 11:02:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.076 11:02:52 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:25.076 11:02:52 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:25.076 11:02:52 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:25.076 11:02:52 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:25.076 11:02:52 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:25.076 11:02:52 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:25.076 11:02:52 -- pm/common@67 -- # TEST_TAG=N/A 00:15:25.076 11:02:52 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:25.076 11:02:52 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:25.076 11:02:52 -- pm/common@71 -- # uname -s 00:15:25.076 11:02:52 -- pm/common@71 -- # PM_OS=Linux 00:15:25.076 11:02:52 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:25.076 11:02:52 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:15:25.076 11:02:52 -- 
pm/common@76 -- # [[ Linux == Linux ]] 00:15:25.076 11:02:52 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:15:25.076 11:02:52 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:15:25.076 11:02:52 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:15:25.076 11:02:52 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:15:25.076 11:02:52 -- common/autotest_common.sh@57 -- # : 1 00:15:25.076 11:02:52 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:15:25.076 11:02:52 -- common/autotest_common.sh@61 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:25.076 11:02:52 -- common/autotest_common.sh@63 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:15:25.076 11:02:52 -- common/autotest_common.sh@65 -- # : 1 00:15:25.076 11:02:52 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:25.076 11:02:52 -- common/autotest_common.sh@67 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:15:25.076 11:02:52 -- common/autotest_common.sh@69 -- # : 00:15:25.076 11:02:52 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:15:25.076 11:02:52 -- common/autotest_common.sh@71 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:15:25.076 11:02:52 -- common/autotest_common.sh@73 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:15:25.076 11:02:52 -- common/autotest_common.sh@75 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:15:25.076 11:02:52 -- common/autotest_common.sh@77 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:25.076 11:02:52 -- common/autotest_common.sh@79 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:15:25.076 11:02:52 -- common/autotest_common.sh@81 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:15:25.076 11:02:52 -- common/autotest_common.sh@83 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:15:25.076 11:02:52 -- common/autotest_common.sh@85 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:15:25.076 11:02:52 -- common/autotest_common.sh@87 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:15:25.076 11:02:52 -- common/autotest_common.sh@89 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:15:25.076 11:02:52 -- common/autotest_common.sh@91 -- # : 1 00:15:25.076 11:02:52 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:15:25.076 11:02:52 -- common/autotest_common.sh@93 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:15:25.076 11:02:52 -- common/autotest_common.sh@95 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:25.076 11:02:52 -- common/autotest_common.sh@97 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:15:25.076 11:02:52 -- common/autotest_common.sh@99 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:15:25.076 11:02:52 -- common/autotest_common.sh@101 -- # : tcp 00:15:25.076 11:02:52 -- 
common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:25.076 11:02:52 -- common/autotest_common.sh@103 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:15:25.076 11:02:52 -- common/autotest_common.sh@105 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:15:25.076 11:02:52 -- common/autotest_common.sh@107 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:15:25.076 11:02:52 -- common/autotest_common.sh@109 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:15:25.076 11:02:52 -- common/autotest_common.sh@111 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:15:25.076 11:02:52 -- common/autotest_common.sh@113 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:15:25.076 11:02:52 -- common/autotest_common.sh@115 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:15:25.076 11:02:52 -- common/autotest_common.sh@117 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:25.076 11:02:52 -- common/autotest_common.sh@119 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:15:25.076 11:02:52 -- common/autotest_common.sh@121 -- # : 1 00:15:25.076 11:02:52 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:15:25.076 11:02:52 -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:15:25.076 11:02:52 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:25.076 11:02:52 -- common/autotest_common.sh@125 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:15:25.076 11:02:52 -- common/autotest_common.sh@127 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:15:25.076 11:02:52 -- common/autotest_common.sh@129 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:15:25.076 11:02:52 -- common/autotest_common.sh@131 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:15:25.076 11:02:52 -- common/autotest_common.sh@133 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:15:25.076 11:02:52 -- common/autotest_common.sh@135 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:15:25.076 11:02:52 -- common/autotest_common.sh@137 -- # : v23.11 00:15:25.076 11:02:52 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:15:25.076 11:02:52 -- common/autotest_common.sh@139 -- # : true 00:15:25.076 11:02:52 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:15:25.076 11:02:52 -- common/autotest_common.sh@141 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:15:25.076 11:02:52 -- common/autotest_common.sh@143 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:15:25.076 11:02:52 -- common/autotest_common.sh@145 -- # : 1 00:15:25.076 11:02:52 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:15:25.076 11:02:52 -- common/autotest_common.sh@147 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:15:25.076 11:02:52 -- 
common/autotest_common.sh@149 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:15:25.076 11:02:52 -- common/autotest_common.sh@151 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:15:25.076 11:02:52 -- common/autotest_common.sh@153 -- # : 00:15:25.076 11:02:52 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:15:25.076 11:02:52 -- common/autotest_common.sh@155 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:15:25.076 11:02:52 -- common/autotest_common.sh@157 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:15:25.076 11:02:52 -- common/autotest_common.sh@159 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:15:25.076 11:02:52 -- common/autotest_common.sh@161 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:15:25.076 11:02:52 -- common/autotest_common.sh@163 -- # : 0 00:15:25.076 11:02:52 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:15:25.076 11:02:52 -- common/autotest_common.sh@166 -- # : 00:15:25.076 11:02:52 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:15:25.076 11:02:52 -- common/autotest_common.sh@168 -- # : 1 00:15:25.076 11:02:52 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:15:25.076 11:02:52 -- common/autotest_common.sh@170 -- # : 1 00:15:25.076 11:02:52 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:25.076 11:02:52 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:25.076 11:02:52 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:25.076 11:02:52 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:25.076 11:02:52 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:15:25.076 11:02:52 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:25.076 11:02:52 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:25.076 11:02:52 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:25.077 11:02:52 -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:25.077 11:02:52 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:25.077 11:02:52 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:25.077 11:02:52 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:25.077 11:02:52 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:25.077 11:02:52 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:25.077 11:02:52 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:15:25.077 11:02:52 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:25.077 11:02:52 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:25.077 11:02:52 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:25.077 11:02:52 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:25.077 11:02:52 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:25.077 11:02:52 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:15:25.077 11:02:52 -- common/autotest_common.sh@199 -- # cat 00:15:25.077 11:02:52 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:15:25.077 11:02:52 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:25.077 11:02:52 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:25.077 11:02:52 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:25.077 11:02:52 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:25.077 11:02:52 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:15:25.077 11:02:52 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:15:25.077 11:02:52 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:25.077 11:02:52 -- 
common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:25.077 11:02:52 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:25.077 11:02:52 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:25.077 11:02:52 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:25.077 11:02:52 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:25.077 11:02:52 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:25.077 11:02:52 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:25.077 11:02:52 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:25.077 11:02:52 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:25.077 11:02:52 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:25.077 11:02:52 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:25.077 11:02:52 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:15:25.077 11:02:52 -- common/autotest_common.sh@252 -- # export valgrind= 00:15:25.077 11:02:52 -- common/autotest_common.sh@252 -- # valgrind= 00:15:25.077 11:02:52 -- common/autotest_common.sh@258 -- # uname -s 00:15:25.077 11:02:52 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:15:25.077 11:02:52 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:15:25.077 11:02:52 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:15:25.077 11:02:52 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:15:25.077 11:02:52 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:15:25.077 11:02:52 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:15:25.077 11:02:52 -- common/autotest_common.sh@268 -- # MAKE=make 00:15:25.077 11:02:52 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:15:25.077 11:02:52 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:15:25.077 11:02:52 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:15:25.077 11:02:52 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:15:25.077 11:02:52 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:15:25.077 11:02:52 -- common/autotest_common.sh@289 -- # for i in "$@" 00:15:25.077 11:02:52 -- common/autotest_common.sh@290 -- # case "$i" in 00:15:25.077 11:02:52 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:15:25.077 11:02:52 -- common/autotest_common.sh@307 -- # [[ -z 78144 ]] 00:15:25.077 11:02:52 -- common/autotest_common.sh@307 -- # kill -0 78144 00:15:25.077 11:02:52 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:15:25.077 11:02:52 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:15:25.077 11:02:52 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:15:25.077 11:02:52 -- common/autotest_common.sh@320 -- # local mount target_dir 00:15:25.077 11:02:52 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:15:25.077 11:02:52 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:15:25.077 11:02:52 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:15:25.077 11:02:52 -- common/autotest_common.sh@327 -- # 
mktemp -udt spdk.XXXXXX 00:15:25.077 11:02:52 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.jITdUr 00:15:25.077 11:02:52 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:25.077 11:02:52 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:15:25.077 11:02:52 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:15:25.077 11:02:52 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.jITdUr/tests/target /tmp/spdk.jITdUr 00:15:25.077 11:02:52 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@316 -- # df -T 00:15:25.077 11:02:52 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=6266609664 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267887616 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=12012851200 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=5955940352 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=12012851200 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=5955940352 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 
-- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267748352 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=143360 00:15:25.077 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:15:25.077 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:15:25.077 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:15:25.078 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.078 11:02:52 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:15:25.078 11:02:52 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:15:25.078 11:02:52 -- common/autotest_common.sh@351 -- # avails["$mount"]=92792446976 00:15:25.078 11:02:52 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:15:25.078 11:02:52 -- common/autotest_common.sh@352 -- # uses["$mount"]=6910332928 00:15:25.078 11:02:52 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:25.078 11:02:52 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:15:25.078 * Looking for test storage... 
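The df -T wall above is the harness building a table of candidate filesystems; the step traced next walks its candidate directories and keeps the first one with enough free space. A condensed sketch of that selection, with names taken from the trace (the tmpfs/ramfs special-casing visible in the [[ btrfs == tmpfs ]] checks is left out as a simplification):

  requested_size=$((2147483648 + 64 * 1024 * 1024))    # 2 GiB of test data plus head-room, as in the trace
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # mount point backing the candidate
      target_space=${avails[$mount]}                                   # free bytes recorded by the df -T pass
      if (( target_space >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir    # here: /home/vagrant/spdk_repo/spdk/test/nvmf/target
          break
      fi
  done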
00:15:25.078 11:02:52 -- common/autotest_common.sh@357 -- # local target_space new_size 00:15:25.078 11:02:52 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:15:25.078 11:02:52 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.078 11:02:52 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:25.078 11:02:52 -- common/autotest_common.sh@361 -- # mount=/home 00:15:25.078 11:02:52 -- common/autotest_common.sh@363 -- # target_space=12012851200 00:15:25.078 11:02:52 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:15:25.078 11:02:52 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:15:25.078 11:02:52 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:15:25.078 11:02:52 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:15:25.078 11:02:52 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:15:25.078 11:02:52 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.078 11:02:52 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.078 11:02:52 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.078 11:02:52 -- common/autotest_common.sh@378 -- # return 0 00:15:25.078 11:02:52 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:15:25.078 11:02:52 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:15:25.078 11:02:52 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:25.078 11:02:52 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:25.078 11:02:52 -- common/autotest_common.sh@1673 -- # true 00:15:25.078 11:02:52 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:15:25.078 11:02:52 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:15:25.078 11:02:52 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:15:25.078 11:02:52 -- common/autotest_common.sh@27 -- # exec 00:15:25.078 11:02:52 -- common/autotest_common.sh@29 -- # exec 00:15:25.078 11:02:52 -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:25.078 11:02:52 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:25.078 11:02:52 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:25.078 11:02:52 -- common/autotest_common.sh@18 -- # set -x 00:15:25.078 11:02:52 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.078 11:02:52 -- nvmf/common.sh@7 -- # uname -s 00:15:25.078 11:02:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.078 11:02:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.078 11:02:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.078 11:02:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.078 11:02:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.078 11:02:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.078 11:02:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.078 11:02:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.078 11:02:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.078 11:02:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.078 11:02:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:25.078 11:02:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:25.078 11:02:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.078 11:02:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.078 11:02:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.078 11:02:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.078 11:02:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.078 11:02:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.078 11:02:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.078 11:02:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.078 11:02:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.078 11:02:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.078 11:02:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.078 11:02:52 -- paths/export.sh@5 -- # export PATH 00:15:25.078 11:02:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.078 11:02:52 -- nvmf/common.sh@47 -- # : 0 00:15:25.078 11:02:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.078 11:02:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.078 11:02:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.078 11:02:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.078 11:02:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.078 11:02:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.078 11:02:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.078 11:02:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.078 11:02:52 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:25.078 11:02:52 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:25.078 11:02:52 -- target/filesystem.sh@15 -- # nvmftestinit 00:15:25.078 11:02:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:25.078 11:02:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.078 11:02:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:25.078 11:02:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:25.078 11:02:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:25.078 11:02:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.078 11:02:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.078 11:02:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.078 11:02:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:25.078 11:02:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:25.078 11:02:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:25.078 11:02:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:25.078 11:02:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:25.078 11:02:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:25.078 11:02:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.078 11:02:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.078 11:02:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.078 11:02:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:25.078 11:02:52 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.078 11:02:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.078 11:02:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.078 11:02:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.078 11:02:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.078 11:02:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.078 11:02:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.078 11:02:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.078 11:02:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:25.078 11:02:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:25.078 Cannot find device "nvmf_tgt_br" 00:15:25.078 11:02:52 -- nvmf/common.sh@155 -- # true 00:15:25.078 11:02:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.078 Cannot find device "nvmf_tgt_br2" 00:15:25.078 11:02:52 -- nvmf/common.sh@156 -- # true 00:15:25.078 11:02:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:25.078 11:02:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:25.078 Cannot find device "nvmf_tgt_br" 00:15:25.078 11:02:52 -- nvmf/common.sh@158 -- # true 00:15:25.078 11:02:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:25.078 Cannot find device "nvmf_tgt_br2" 00:15:25.078 11:02:52 -- nvmf/common.sh@159 -- # true 00:15:25.079 11:02:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:25.079 11:02:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:25.079 11:02:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.079 11:02:52 -- nvmf/common.sh@162 -- # true 00:15:25.079 11:02:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.079 11:02:52 -- nvmf/common.sh@163 -- # true 00:15:25.079 11:02:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.079 11:02:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.079 11:02:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.079 11:02:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.079 11:02:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.079 11:02:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.079 11:02:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.079 11:02:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.079 11:02:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.079 11:02:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:25.079 11:02:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:25.079 11:02:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:25.079 11:02:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:25.079 11:02:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.079 11:02:52 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.079 11:02:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.079 11:02:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:25.079 11:02:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:25.079 11:02:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.079 11:02:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.079 11:02:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.079 11:02:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.079 11:02:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.079 11:02:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:25.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:15:25.079 00:15:25.079 --- 10.0.0.2 ping statistics --- 00:15:25.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.079 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:15:25.079 11:02:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:25.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:25.079 00:15:25.079 --- 10.0.0.3 ping statistics --- 00:15:25.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.079 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:25.079 11:02:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:25.079 00:15:25.079 --- 10.0.0.1 ping statistics --- 00:15:25.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.079 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:25.079 11:02:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.079 11:02:52 -- nvmf/common.sh@422 -- # return 0 00:15:25.079 11:02:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:25.079 11:02:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.079 11:02:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:25.079 11:02:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:25.079 11:02:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.079 11:02:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:25.079 11:02:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:25.079 11:02:52 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:25.079 11:02:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.079 11:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.079 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:15:25.079 ************************************ 00:15:25.079 START TEST nvmf_filesystem_no_in_capsule 00:15:25.079 ************************************ 00:15:25.079 11:02:52 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:15:25.079 11:02:52 -- target/filesystem.sh@47 -- # in_capsule=0 00:15:25.079 11:02:52 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:25.079 11:02:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:25.079 11:02:52 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:15:25.079 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:15:25.079 11:02:52 -- nvmf/common.sh@470 -- # nvmfpid=78314 00:15:25.079 11:02:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.079 11:02:52 -- nvmf/common.sh@471 -- # waitforlisten 78314 00:15:25.079 11:02:52 -- common/autotest_common.sh@817 -- # '[' -z 78314 ']' 00:15:25.079 11:02:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.079 11:02:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:25.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.079 11:02:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.079 11:02:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:25.079 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:15:25.079 [2024-04-18 11:02:52.984493] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:25.079 [2024-04-18 11:02:52.984583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.079 [2024-04-18 11:02:53.120522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.079 [2024-04-18 11:02:53.220924] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.079 [2024-04-18 11:02:53.220992] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.079 [2024-04-18 11:02:53.221004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.079 [2024-04-18 11:02:53.221013] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.079 [2024-04-18 11:02:53.221020] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
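Stripped of the xtrace noise, the nvmf_veth_init block above builds a small bridged test network: one veth pair for the initiator and two for the target, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace where nvmf_tgt now runs. A condensed replay of those commands, names and addresses exactly as in the trace (the failed cleanup attempts and their "Cannot find device" messages are dropped):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up                 # bridge the host-side ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify this topology before nvme-tcp is modprobed and nvmf_tgt is launched inside the namespace.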
00:15:25.079 [2024-04-18 11:02:53.221187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.079 [2024-04-18 11:02:53.221401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.079 [2024-04-18 11:02:53.221952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.079 [2024-04-18 11:02:53.221983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.338 11:02:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:25.338 11:02:53 -- common/autotest_common.sh@850 -- # return 0 00:15:25.338 11:02:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:25.338 11:02:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:25.338 11:02:53 -- common/autotest_common.sh@10 -- # set +x 00:15:25.338 11:02:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.338 11:02:53 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:25.338 11:02:53 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:25.338 11:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.338 11:02:53 -- common/autotest_common.sh@10 -- # set +x 00:15:25.338 [2024-04-18 11:02:53.953884] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.338 11:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.338 11:02:53 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:25.338 11:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.338 11:02:53 -- common/autotest_common.sh@10 -- # set +x 00:15:25.596 Malloc1 00:15:25.596 11:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.596 11:02:54 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:25.596 11:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.596 11:02:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.596 11:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.596 11:02:54 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.596 11:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.596 11:02:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.596 11:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.596 11:02:54 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.596 11:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.596 11:02:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.596 [2024-04-18 11:02:54.147707] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.596 11:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.596 11:02:54 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:25.596 11:02:54 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:15:25.596 11:02:54 -- common/autotest_common.sh@1365 -- # local bdev_info 00:15:25.596 11:02:54 -- common/autotest_common.sh@1366 -- # local bs 00:15:25.596 11:02:54 -- common/autotest_common.sh@1367 -- # local nb 00:15:25.596 11:02:54 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:25.596 11:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.596 11:02:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.596 
11:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.596 11:02:54 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:15:25.596 { 00:15:25.596 "aliases": [ 00:15:25.596 "03aad8e1-9ff0-4b22-ac02-fc08219492e5" 00:15:25.596 ], 00:15:25.596 "assigned_rate_limits": { 00:15:25.596 "r_mbytes_per_sec": 0, 00:15:25.596 "rw_ios_per_sec": 0, 00:15:25.596 "rw_mbytes_per_sec": 0, 00:15:25.596 "w_mbytes_per_sec": 0 00:15:25.596 }, 00:15:25.596 "block_size": 512, 00:15:25.596 "claim_type": "exclusive_write", 00:15:25.596 "claimed": true, 00:15:25.596 "driver_specific": {}, 00:15:25.596 "memory_domains": [ 00:15:25.596 { 00:15:25.596 "dma_device_id": "system", 00:15:25.596 "dma_device_type": 1 00:15:25.596 }, 00:15:25.596 { 00:15:25.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.596 "dma_device_type": 2 00:15:25.596 } 00:15:25.596 ], 00:15:25.596 "name": "Malloc1", 00:15:25.596 "num_blocks": 1048576, 00:15:25.596 "product_name": "Malloc disk", 00:15:25.596 "supported_io_types": { 00:15:25.596 "abort": true, 00:15:25.596 "compare": false, 00:15:25.596 "compare_and_write": false, 00:15:25.596 "flush": true, 00:15:25.596 "nvme_admin": false, 00:15:25.596 "nvme_io": false, 00:15:25.596 "read": true, 00:15:25.596 "reset": true, 00:15:25.596 "unmap": true, 00:15:25.596 "write": true, 00:15:25.596 "write_zeroes": true 00:15:25.596 }, 00:15:25.596 "uuid": "03aad8e1-9ff0-4b22-ac02-fc08219492e5", 00:15:25.596 "zoned": false 00:15:25.596 } 00:15:25.596 ]' 00:15:25.596 11:02:54 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:15:25.596 11:02:54 -- common/autotest_common.sh@1369 -- # bs=512 00:15:25.596 11:02:54 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:15:25.854 11:02:54 -- common/autotest_common.sh@1370 -- # nb=1048576 00:15:25.854 11:02:54 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:15:25.854 11:02:54 -- common/autotest_common.sh@1374 -- # echo 512 00:15:25.854 11:02:54 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:25.854 11:02:54 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.854 11:02:54 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.854 11:02:54 -- common/autotest_common.sh@1184 -- # local i=0 00:15:25.854 11:02:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.854 11:02:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:25.854 11:02:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:28.384 11:02:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:28.384 11:02:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:28.384 11:02:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.384 11:02:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:28.384 11:02:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.384 11:02:56 -- common/autotest_common.sh@1194 -- # return 0 00:15:28.384 11:02:56 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:28.384 11:02:56 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:28.384 11:02:56 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:28.384 11:02:56 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:28.384 11:02:56 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:15:28.384 11:02:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:28.384 11:02:56 -- setup/common.sh@80 -- # echo 536870912 00:15:28.384 11:02:56 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:28.384 11:02:56 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:28.385 11:02:56 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:28.385 11:02:56 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:28.385 11:02:56 -- target/filesystem.sh@69 -- # partprobe 00:15:28.385 11:02:56 -- target/filesystem.sh@70 -- # sleep 1 00:15:29.337 11:02:57 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:29.337 11:02:57 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:29.337 11:02:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:29.337 11:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.337 11:02:57 -- common/autotest_common.sh@10 -- # set +x 00:15:29.337 ************************************ 00:15:29.337 START TEST filesystem_ext4 00:15:29.337 ************************************ 00:15:29.337 11:02:57 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:29.337 11:02:57 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:29.337 11:02:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:29.337 11:02:57 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:29.337 11:02:57 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:15:29.337 11:02:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:29.337 11:02:57 -- common/autotest_common.sh@914 -- # local i=0 00:15:29.337 11:02:57 -- common/autotest_common.sh@915 -- # local force 00:15:29.337 11:02:57 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:15:29.337 11:02:57 -- common/autotest_common.sh@918 -- # force=-F 00:15:29.337 11:02:57 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:29.337 mke2fs 1.46.5 (30-Dec-2021) 00:15:29.337 Discarding device blocks: 0/522240 done 00:15:29.338 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:29.338 Filesystem UUID: 0feac764-a260-4486-a287-42fa902ffa0b 00:15:29.338 Superblock backups stored on blocks: 00:15:29.338 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:29.338 00:15:29.338 Allocating group tables: 0/64 done 00:15:29.338 Writing inode tables: 0/64 done 00:15:29.338 Creating journal (8192 blocks): done 00:15:29.338 Writing superblocks and filesystem accounting information: 0/64 done 00:15:29.338 00:15:29.338 11:02:57 -- common/autotest_common.sh@931 -- # return 0 00:15:29.338 11:02:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:29.338 11:02:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:29.597 11:02:57 -- target/filesystem.sh@25 -- # sync 00:15:29.597 11:02:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:29.597 11:02:58 -- target/filesystem.sh@27 -- # sync 00:15:29.597 11:02:58 -- target/filesystem.sh@29 -- # i=0 00:15:29.597 11:02:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:29.597 11:02:58 -- target/filesystem.sh@37 -- # kill -0 78314 00:15:29.597 11:02:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:29.597 11:02:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:29.597 11:02:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:29.597 11:02:58 -- target/filesystem.sh@43 -- # grep -q -w 
nvme0n1p1 00:15:29.597 ************************************ 00:15:29.597 END TEST filesystem_ext4 00:15:29.597 ************************************ 00:15:29.597 00:15:29.597 real 0m0.361s 00:15:29.597 user 0m0.017s 00:15:29.597 sys 0m0.054s 00:15:29.597 11:02:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:29.597 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:15:29.597 11:02:58 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:29.597 11:02:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:29.597 11:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.597 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:15:29.597 ************************************ 00:15:29.597 START TEST filesystem_btrfs 00:15:29.597 ************************************ 00:15:29.597 11:02:58 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:29.597 11:02:58 -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:29.597 11:02:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:29.597 11:02:58 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:29.597 11:02:58 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:15:29.597 11:02:58 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:29.597 11:02:58 -- common/autotest_common.sh@914 -- # local i=0 00:15:29.597 11:02:58 -- common/autotest_common.sh@915 -- # local force 00:15:29.597 11:02:58 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:15:29.597 11:02:58 -- common/autotest_common.sh@920 -- # force=-f 00:15:29.597 11:02:58 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:29.856 btrfs-progs v6.6.2 00:15:29.856 See https://btrfs.readthedocs.io for more information. 00:15:29.856 00:15:29.856 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:29.856 NOTE: several default settings have changed in version 5.15, please make sure 00:15:29.856 this does not affect your deployments: 00:15:29.856 - DUP for metadata (-m dup) 00:15:29.856 - enabled no-holes (-O no-holes) 00:15:29.856 - enabled free-space-tree (-R free-space-tree) 00:15:29.856 00:15:29.856 Label: (null) 00:15:29.856 UUID: 99d65101-5f0d-4d2f-89fd-1b2e925872d6 00:15:29.856 Node size: 16384 00:15:29.856 Sector size: 4096 00:15:29.856 Filesystem size: 510.00MiB 00:15:29.856 Block group profiles: 00:15:29.856 Data: single 8.00MiB 00:15:29.856 Metadata: DUP 32.00MiB 00:15:29.856 System: DUP 8.00MiB 00:15:29.856 SSD detected: yes 00:15:29.856 Zoned device: no 00:15:29.856 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:15:29.856 Runtime features: free-space-tree 00:15:29.856 Checksum: crc32c 00:15:29.856 Number of devices: 1 00:15:29.856 Devices: 00:15:29.856 ID SIZE PATH 00:15:29.856 1 510.00MiB /dev/nvme0n1p1 00:15:29.856 00:15:29.857 11:02:58 -- common/autotest_common.sh@931 -- # return 0 00:15:29.857 11:02:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:29.857 11:02:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:29.857 11:02:58 -- target/filesystem.sh@25 -- # sync 00:15:29.857 11:02:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:29.857 11:02:58 -- target/filesystem.sh@27 -- # sync 00:15:29.857 11:02:58 -- target/filesystem.sh@29 -- # i=0 00:15:29.857 11:02:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:29.857 11:02:58 -- target/filesystem.sh@37 -- # kill -0 78314 00:15:29.857 11:02:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:29.857 11:02:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:29.857 11:02:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:29.857 11:02:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:29.857 00:15:29.857 real 0m0.222s 00:15:29.857 user 0m0.021s 00:15:29.857 sys 0m0.060s 00:15:29.857 ************************************ 00:15:29.857 END TEST filesystem_btrfs 00:15:29.857 ************************************ 00:15:29.857 11:02:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:29.857 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:15:29.857 11:02:58 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:29.857 11:02:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:29.857 11:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.857 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:15:30.116 ************************************ 00:15:30.116 START TEST filesystem_xfs 00:15:30.116 ************************************ 00:15:30.116 11:02:58 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:15:30.116 11:02:58 -- target/filesystem.sh@18 -- # fstype=xfs 00:15:30.116 11:02:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:30.116 11:02:58 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:30.116 11:02:58 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:15:30.116 11:02:58 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:30.116 11:02:58 -- common/autotest_common.sh@914 -- # local i=0 00:15:30.116 11:02:58 -- common/autotest_common.sh@915 -- # local force 00:15:30.116 11:02:58 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:15:30.116 11:02:58 -- common/autotest_common.sh@920 -- # force=-f 00:15:30.116 11:02:58 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:30.116 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:30.116 = sectsz=512 attr=2, projid32bit=1 00:15:30.116 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:30.116 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:30.116 data = bsize=4096 blocks=130560, imaxpct=25 00:15:30.116 = sunit=0 swidth=0 blks 00:15:30.116 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:30.116 log =internal log bsize=4096 blocks=16384, version=2 00:15:30.116 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:30.116 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:30.683 Discarding blocks...Done. 00:15:30.683 11:02:59 -- common/autotest_common.sh@931 -- # return 0 00:15:30.683 11:02:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:33.215 11:03:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:33.215 11:03:01 -- target/filesystem.sh@25 -- # sync 00:15:33.215 11:03:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:33.215 11:03:01 -- target/filesystem.sh@27 -- # sync 00:15:33.215 11:03:01 -- target/filesystem.sh@29 -- # i=0 00:15:33.215 11:03:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:33.215 11:03:01 -- target/filesystem.sh@37 -- # kill -0 78314 00:15:33.215 11:03:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:33.215 11:03:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:33.215 11:03:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:33.215 11:03:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:33.215 ************************************ 00:15:33.215 END TEST filesystem_xfs 00:15:33.215 ************************************ 00:15:33.215 00:15:33.215 real 0m3.164s 00:15:33.215 user 0m0.016s 00:15:33.215 sys 0m0.060s 00:15:33.215 11:03:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:33.215 11:03:01 -- common/autotest_common.sh@10 -- # set +x 00:15:33.215 11:03:01 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:33.215 11:03:01 -- target/filesystem.sh@93 -- # sync 00:15:33.215 11:03:01 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.215 11:03:01 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.215 11:03:01 -- common/autotest_common.sh@1205 -- # local i=0 00:15:33.215 11:03:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:33.215 11:03:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.215 11:03:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:33.215 11:03:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.215 11:03:01 -- common/autotest_common.sh@1217 -- # return 0 00:15:33.215 11:03:01 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.215 11:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.215 11:03:01 -- common/autotest_common.sh@10 -- # set +x 00:15:33.215 11:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.215 11:03:01 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:33.215 11:03:01 -- target/filesystem.sh@101 -- # killprocess 78314 00:15:33.215 11:03:01 -- common/autotest_common.sh@936 -- # '[' -z 78314 ']' 00:15:33.215 11:03:01 -- common/autotest_common.sh@940 -- # kill -0 78314 00:15:33.215 11:03:01 -- 
common/autotest_common.sh@941 -- # uname 00:15:33.215 11:03:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.215 11:03:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78314 00:15:33.215 killing process with pid 78314 00:15:33.215 11:03:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:33.215 11:03:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:33.215 11:03:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78314' 00:15:33.215 11:03:01 -- common/autotest_common.sh@955 -- # kill 78314 00:15:33.215 11:03:01 -- common/autotest_common.sh@960 -- # wait 78314 00:15:33.782 11:03:02 -- target/filesystem.sh@102 -- # nvmfpid= 00:15:33.782 00:15:33.782 real 0m9.314s 00:15:33.782 user 0m35.256s 00:15:33.782 sys 0m1.701s 00:15:33.782 11:03:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:33.782 11:03:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.782 ************************************ 00:15:33.782 END TEST nvmf_filesystem_no_in_capsule 00:15:33.782 ************************************ 00:15:33.782 11:03:02 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:33.782 11:03:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:33.782 11:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:33.782 11:03:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.782 ************************************ 00:15:33.782 START TEST nvmf_filesystem_in_capsule 00:15:33.782 ************************************ 00:15:33.782 11:03:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:15:33.782 11:03:02 -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:33.782 11:03:02 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:33.782 11:03:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:33.782 11:03:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:33.782 11:03:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.782 11:03:02 -- nvmf/common.sh@470 -- # nvmfpid=78639 00:15:33.782 11:03:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.782 11:03:02 -- nvmf/common.sh@471 -- # waitforlisten 78639 00:15:33.782 11:03:02 -- common/autotest_common.sh@817 -- # '[' -z 78639 ']' 00:15:33.782 11:03:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.782 11:03:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:33.782 11:03:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.782 11:03:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:33.782 11:03:02 -- common/autotest_common.sh@10 -- # set +x 00:15:34.042 [2024-04-18 11:03:02.425557] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
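The first pass that just finished (in_capsule=0) reduces to the short target/host sequence below; the run starting here repeats it with -c 4096 on the transport. A condensed sketch using the same rpc_cmd wrapper the trace uses to issue the SPDK JSON-RPC calls:

  # target side (nvmf_tgt running inside nvmf_tgt_ns_spdk)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0                 # second pass: -c 4096
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1                        # 512 MiB malloc bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%          # per-filesystem checks then run on nvme0n1p1
  # ext4 / btrfs / xfs cases: mkfs, mount, touch and rm with syncs, umount
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Each filesystem case passes only if the file round-trip succeeds, the partition is still visible to lsblk afterwards, and the target process still answers kill -0.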
00:15:34.042 [2024-04-18 11:03:02.425639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.042 [2024-04-18 11:03:02.566688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.042 [2024-04-18 11:03:02.666628] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.042 [2024-04-18 11:03:02.666946] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.042 [2024-04-18 11:03:02.667114] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.042 [2024-04-18 11:03:02.667333] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.042 [2024-04-18 11:03:02.667368] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.042 [2024-04-18 11:03:02.667527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.042 [2024-04-18 11:03:02.667656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.042 [2024-04-18 11:03:02.668292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.042 [2024-04-18 11:03:02.668297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.977 11:03:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:34.977 11:03:03 -- common/autotest_common.sh@850 -- # return 0 00:15:34.977 11:03:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:34.977 11:03:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:34.977 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 11:03:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.977 11:03:03 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:34.977 11:03:03 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:34.977 11:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.977 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 [2024-04-18 11:03:03.375496] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.977 11:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.977 11:03:03 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:34.977 11:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.977 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 Malloc1 00:15:34.977 11:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.977 11:03:03 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:34.977 11:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.977 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 11:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.977 11:03:03 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.977 11:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.977 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 11:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.977 11:03:03 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.977 11:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.977 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 [2024-04-18 11:03:03.563210] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.977 11:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.977 11:03:03 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:34.977 11:03:03 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:15:34.977 11:03:03 -- common/autotest_common.sh@1365 -- # local bdev_info 00:15:34.977 11:03:03 -- common/autotest_common.sh@1366 -- # local bs 00:15:34.977 11:03:03 -- common/autotest_common.sh@1367 -- # local nb 00:15:34.977 11:03:03 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:34.977 11:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.977 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:15:34.977 11:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.977 11:03:03 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:15:34.977 { 00:15:34.977 "aliases": [ 00:15:34.977 "b8809b6b-2aba-40c2-acf2-a75a9e7b208d" 00:15:34.977 ], 00:15:34.977 "assigned_rate_limits": { 00:15:34.977 "r_mbytes_per_sec": 0, 00:15:34.977 "rw_ios_per_sec": 0, 00:15:34.977 "rw_mbytes_per_sec": 0, 00:15:34.977 "w_mbytes_per_sec": 0 00:15:34.977 }, 00:15:34.977 "block_size": 512, 00:15:34.977 "claim_type": "exclusive_write", 00:15:34.977 "claimed": true, 00:15:34.977 "driver_specific": {}, 00:15:34.977 "memory_domains": [ 00:15:34.977 { 00:15:34.977 "dma_device_id": "system", 00:15:34.977 "dma_device_type": 1 00:15:34.977 }, 00:15:34.977 { 00:15:34.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.977 "dma_device_type": 2 00:15:34.977 } 00:15:34.977 ], 00:15:34.977 "name": "Malloc1", 00:15:34.977 "num_blocks": 1048576, 00:15:34.977 "product_name": "Malloc disk", 00:15:34.977 "supported_io_types": { 00:15:34.977 "abort": true, 00:15:34.977 "compare": false, 00:15:34.977 "compare_and_write": false, 00:15:34.977 "flush": true, 00:15:34.977 "nvme_admin": false, 00:15:34.977 "nvme_io": false, 00:15:34.977 "read": true, 00:15:34.977 "reset": true, 00:15:34.977 "unmap": true, 00:15:34.977 "write": true, 00:15:34.977 "write_zeroes": true 00:15:34.977 }, 00:15:34.977 "uuid": "b8809b6b-2aba-40c2-acf2-a75a9e7b208d", 00:15:34.977 "zoned": false 00:15:34.977 } 00:15:34.977 ]' 00:15:34.977 11:03:03 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:15:35.235 11:03:03 -- common/autotest_common.sh@1369 -- # bs=512 00:15:35.235 11:03:03 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:15:35.235 11:03:03 -- common/autotest_common.sh@1370 -- # nb=1048576 00:15:35.235 11:03:03 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:15:35.235 11:03:03 -- common/autotest_common.sh@1374 -- # echo 512 00:15:35.235 11:03:03 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:35.235 11:03:03 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:35.235 11:03:03 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.235 11:03:03 -- common/autotest_common.sh@1184 -- # local i=0 00:15:35.235 11:03:03 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:15:35.235 11:03:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:35.235 11:03:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:37.763 11:03:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:37.763 11:03:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:37.763 11:03:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.763 11:03:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:37.763 11:03:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.763 11:03:05 -- common/autotest_common.sh@1194 -- # return 0 00:15:37.763 11:03:05 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:37.763 11:03:05 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:37.763 11:03:05 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:37.763 11:03:05 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:37.763 11:03:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:37.763 11:03:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:37.763 11:03:05 -- setup/common.sh@80 -- # echo 536870912 00:15:37.763 11:03:05 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:37.763 11:03:05 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:37.763 11:03:05 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:37.763 11:03:05 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:37.763 11:03:05 -- target/filesystem.sh@69 -- # partprobe 00:15:37.763 11:03:05 -- target/filesystem.sh@70 -- # sleep 1 00:15:38.699 11:03:06 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:38.699 11:03:06 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:38.699 11:03:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:38.699 11:03:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.699 11:03:06 -- common/autotest_common.sh@10 -- # set +x 00:15:38.699 ************************************ 00:15:38.699 START TEST filesystem_in_capsule_ext4 00:15:38.699 ************************************ 00:15:38.699 11:03:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:38.699 11:03:07 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:38.699 11:03:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:38.699 11:03:07 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:38.699 11:03:07 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:15:38.699 11:03:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:38.699 11:03:07 -- common/autotest_common.sh@914 -- # local i=0 00:15:38.699 11:03:07 -- common/autotest_common.sh@915 -- # local force 00:15:38.699 11:03:07 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:15:38.699 11:03:07 -- common/autotest_common.sh@918 -- # force=-F 00:15:38.699 11:03:07 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:38.699 mke2fs 1.46.5 (30-Dec-2021) 00:15:38.699 Discarding device blocks: 0/522240 done 00:15:38.699 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:38.699 Filesystem UUID: 7fe95415-f63f-4a0f-a626-19a52179eeb1 00:15:38.699 Superblock backups stored on blocks: 00:15:38.699 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:38.699 00:15:38.699 Allocating group tables: 0/64 done 
00:15:38.699 Writing inode tables: 0/64 done 00:15:38.699 Creating journal (8192 blocks): done 00:15:38.699 Writing superblocks and filesystem accounting information: 0/64 done 00:15:38.699 00:15:38.699 11:03:07 -- common/autotest_common.sh@931 -- # return 0 00:15:38.699 11:03:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:38.699 11:03:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:38.699 11:03:07 -- target/filesystem.sh@25 -- # sync 00:15:38.958 11:03:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:38.958 11:03:07 -- target/filesystem.sh@27 -- # sync 00:15:38.958 11:03:07 -- target/filesystem.sh@29 -- # i=0 00:15:38.958 11:03:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:38.958 11:03:07 -- target/filesystem.sh@37 -- # kill -0 78639 00:15:38.958 11:03:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:38.958 11:03:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:38.958 11:03:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:38.958 11:03:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:38.958 ************************************ 00:15:38.958 END TEST filesystem_in_capsule_ext4 00:15:38.958 ************************************ 00:15:38.958 00:15:38.958 real 0m0.369s 00:15:38.958 user 0m0.027s 00:15:38.958 sys 0m0.054s 00:15:38.958 11:03:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.958 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:15:38.958 11:03:07 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:38.958 11:03:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:38.958 11:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.958 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:15:38.958 ************************************ 00:15:38.958 START TEST filesystem_in_capsule_btrfs 00:15:38.958 ************************************ 00:15:38.958 11:03:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:38.958 11:03:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:38.958 11:03:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:38.958 11:03:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:38.958 11:03:07 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:15:38.958 11:03:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:38.958 11:03:07 -- common/autotest_common.sh@914 -- # local i=0 00:15:38.958 11:03:07 -- common/autotest_common.sh@915 -- # local force 00:15:38.958 11:03:07 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:15:38.958 11:03:07 -- common/autotest_common.sh@920 -- # force=-f 00:15:38.958 11:03:07 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:39.216 btrfs-progs v6.6.2 00:15:39.216 See https://btrfs.readthedocs.io for more information. 00:15:39.216 00:15:39.216 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:39.216 NOTE: several default settings have changed in version 5.15, please make sure 00:15:39.216 this does not affect your deployments: 00:15:39.216 - DUP for metadata (-m dup) 00:15:39.216 - enabled no-holes (-O no-holes) 00:15:39.216 - enabled free-space-tree (-R free-space-tree) 00:15:39.216 00:15:39.216 Label: (null) 00:15:39.216 UUID: 1ad549f2-77a9-4ea0-bdf6-9fbb3cac75c5 00:15:39.216 Node size: 16384 00:15:39.216 Sector size: 4096 00:15:39.216 Filesystem size: 510.00MiB 00:15:39.216 Block group profiles: 00:15:39.216 Data: single 8.00MiB 00:15:39.216 Metadata: DUP 32.00MiB 00:15:39.216 System: DUP 8.00MiB 00:15:39.216 SSD detected: yes 00:15:39.216 Zoned device: no 00:15:39.216 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:15:39.216 Runtime features: free-space-tree 00:15:39.216 Checksum: crc32c 00:15:39.216 Number of devices: 1 00:15:39.216 Devices: 00:15:39.216 ID SIZE PATH 00:15:39.216 1 510.00MiB /dev/nvme0n1p1 00:15:39.216 00:15:39.216 11:03:07 -- common/autotest_common.sh@931 -- # return 0 00:15:39.216 11:03:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:39.216 11:03:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:39.216 11:03:07 -- target/filesystem.sh@25 -- # sync 00:15:39.216 11:03:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:39.216 11:03:07 -- target/filesystem.sh@27 -- # sync 00:15:39.216 11:03:07 -- target/filesystem.sh@29 -- # i=0 00:15:39.216 11:03:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:39.216 11:03:07 -- target/filesystem.sh@37 -- # kill -0 78639 00:15:39.216 11:03:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:39.216 11:03:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:39.216 11:03:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:39.216 11:03:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:39.216 ************************************ 00:15:39.216 END TEST filesystem_in_capsule_btrfs 00:15:39.216 ************************************ 00:15:39.216 00:15:39.216 real 0m0.227s 00:15:39.216 user 0m0.016s 00:15:39.216 sys 0m0.070s 00:15:39.216 11:03:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:39.216 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:15:39.216 11:03:07 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:39.216 11:03:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:39.216 11:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.216 11:03:07 -- common/autotest_common.sh@10 -- # set +x 00:15:39.475 ************************************ 00:15:39.475 START TEST filesystem_in_capsule_xfs 00:15:39.475 ************************************ 00:15:39.475 11:03:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:15:39.475 11:03:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:15:39.475 11:03:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:39.475 11:03:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:39.475 11:03:07 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:15:39.475 11:03:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:39.475 11:03:07 -- common/autotest_common.sh@914 -- # local i=0 00:15:39.475 11:03:07 -- common/autotest_common.sh@915 -- # local force 00:15:39.475 11:03:07 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:15:39.475 11:03:07 -- common/autotest_common.sh@920 -- # force=-f 
00:15:39.475 11:03:07 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:39.475 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:39.475 = sectsz=512 attr=2, projid32bit=1 00:15:39.475 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:39.475 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:39.475 data = bsize=4096 blocks=130560, imaxpct=25 00:15:39.475 = sunit=0 swidth=0 blks 00:15:39.475 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:39.475 log =internal log bsize=4096 blocks=16384, version=2 00:15:39.475 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:39.475 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:40.041 Discarding blocks...Done. 00:15:40.041 11:03:08 -- common/autotest_common.sh@931 -- # return 0 00:15:40.041 11:03:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:41.942 11:03:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:41.942 11:03:10 -- target/filesystem.sh@25 -- # sync 00:15:41.942 11:03:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:41.942 11:03:10 -- target/filesystem.sh@27 -- # sync 00:15:41.942 11:03:10 -- target/filesystem.sh@29 -- # i=0 00:15:41.942 11:03:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:41.942 11:03:10 -- target/filesystem.sh@37 -- # kill -0 78639 00:15:41.942 11:03:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:41.942 11:03:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:41.942 11:03:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:41.942 11:03:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:41.942 ************************************ 00:15:41.942 END TEST filesystem_in_capsule_xfs 00:15:41.942 ************************************ 00:15:41.942 00:15:41.942 real 0m2.625s 00:15:41.942 user 0m0.028s 00:15:41.942 sys 0m0.047s 00:15:41.942 11:03:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.942 11:03:10 -- common/autotest_common.sh@10 -- # set +x 00:15:41.942 11:03:10 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:41.942 11:03:10 -- target/filesystem.sh@93 -- # sync 00:15:41.942 11:03:10 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.201 11:03:10 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:42.201 11:03:10 -- common/autotest_common.sh@1205 -- # local i=0 00:15:42.201 11:03:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:42.201 11:03:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.201 11:03:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:42.201 11:03:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.201 11:03:10 -- common/autotest_common.sh@1217 -- # return 0 00:15:42.201 11:03:10 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.201 11:03:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.201 11:03:10 -- common/autotest_common.sh@10 -- # set +x 00:15:42.201 11:03:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.201 11:03:10 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:42.201 11:03:10 -- target/filesystem.sh@101 -- # killprocess 78639 00:15:42.201 11:03:10 -- common/autotest_common.sh@936 -- # '[' -z 78639 ']' 00:15:42.201 11:03:10 -- common/autotest_common.sh@940 -- # kill -0 78639 
00:15:42.201 11:03:10 -- common/autotest_common.sh@941 -- # uname 00:15:42.201 11:03:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:42.201 11:03:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78639 00:15:42.201 killing process with pid 78639 00:15:42.201 11:03:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:42.201 11:03:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:42.201 11:03:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78639' 00:15:42.201 11:03:10 -- common/autotest_common.sh@955 -- # kill 78639 00:15:42.201 11:03:10 -- common/autotest_common.sh@960 -- # wait 78639 00:15:42.767 ************************************ 00:15:42.767 END TEST nvmf_filesystem_in_capsule 00:15:42.767 ************************************ 00:15:42.767 11:03:11 -- target/filesystem.sh@102 -- # nvmfpid= 00:15:42.767 00:15:42.767 real 0m8.835s 00:15:42.767 user 0m33.434s 00:15:42.767 sys 0m1.610s 00:15:42.767 11:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:42.767 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:42.767 11:03:11 -- target/filesystem.sh@108 -- # nvmftestfini 00:15:42.767 11:03:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:42.767 11:03:11 -- nvmf/common.sh@117 -- # sync 00:15:42.767 11:03:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.767 11:03:11 -- nvmf/common.sh@120 -- # set +e 00:15:42.767 11:03:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.768 11:03:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.768 rmmod nvme_tcp 00:15:42.768 rmmod nvme_fabrics 00:15:42.768 rmmod nvme_keyring 00:15:42.768 11:03:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.768 11:03:11 -- nvmf/common.sh@124 -- # set -e 00:15:42.768 11:03:11 -- nvmf/common.sh@125 -- # return 0 00:15:42.768 11:03:11 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:42.768 11:03:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:42.768 11:03:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:42.768 11:03:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:42.768 11:03:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.768 11:03:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.768 11:03:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.768 11:03:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.768 11:03:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.768 11:03:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:42.768 00:15:42.768 real 0m19.104s 00:15:42.768 user 1m8.971s 00:15:42.768 sys 0m3.785s 00:15:42.768 11:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:42.768 ************************************ 00:15:42.768 END TEST nvmf_filesystem 00:15:42.768 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:42.768 ************************************ 00:15:43.026 11:03:11 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:43.026 11:03:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:43.026 11:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:43.026 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:43.026 ************************************ 00:15:43.026 START TEST nvmf_discovery 00:15:43.026 ************************************ 00:15:43.026 11:03:11 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:43.026 * Looking for test storage... 00:15:43.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:43.026 11:03:11 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.026 11:03:11 -- nvmf/common.sh@7 -- # uname -s 00:15:43.026 11:03:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.026 11:03:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.026 11:03:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.026 11:03:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.026 11:03:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.027 11:03:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.027 11:03:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.027 11:03:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.027 11:03:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.027 11:03:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.027 11:03:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:43.027 11:03:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:43.027 11:03:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.027 11:03:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.027 11:03:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.027 11:03:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.027 11:03:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.027 11:03:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.027 11:03:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.027 11:03:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.027 11:03:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.027 11:03:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.027 11:03:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.027 11:03:11 -- paths/export.sh@5 -- # export PATH 00:15:43.027 11:03:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.027 11:03:11 -- nvmf/common.sh@47 -- # : 0 00:15:43.027 11:03:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.027 11:03:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.027 11:03:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.027 11:03:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.027 11:03:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.027 11:03:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.027 11:03:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.027 11:03:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.027 11:03:11 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:43.027 11:03:11 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:43.027 11:03:11 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:43.027 11:03:11 -- target/discovery.sh@15 -- # hash nvme 00:15:43.027 11:03:11 -- target/discovery.sh@20 -- # nvmftestinit 00:15:43.027 11:03:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:43.027 11:03:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.027 11:03:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:43.027 11:03:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:43.027 11:03:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:43.027 11:03:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.027 11:03:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.027 11:03:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.027 11:03:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:43.027 11:03:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:43.027 11:03:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:43.027 11:03:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:43.027 11:03:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:43.027 11:03:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:43.027 11:03:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.027 11:03:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.027 11:03:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.027 11:03:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:43.027 11:03:11 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.027 11:03:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.027 11:03:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.027 11:03:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.027 11:03:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.027 11:03:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.027 11:03:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.027 11:03:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.027 11:03:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:43.027 11:03:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:43.027 Cannot find device "nvmf_tgt_br" 00:15:43.027 11:03:11 -- nvmf/common.sh@155 -- # true 00:15:43.027 11:03:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.027 Cannot find device "nvmf_tgt_br2" 00:15:43.027 11:03:11 -- nvmf/common.sh@156 -- # true 00:15:43.027 11:03:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:43.027 11:03:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:43.027 Cannot find device "nvmf_tgt_br" 00:15:43.027 11:03:11 -- nvmf/common.sh@158 -- # true 00:15:43.027 11:03:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:43.027 Cannot find device "nvmf_tgt_br2" 00:15:43.286 11:03:11 -- nvmf/common.sh@159 -- # true 00:15:43.286 11:03:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:43.286 11:03:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:43.286 11:03:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.286 11:03:11 -- nvmf/common.sh@162 -- # true 00:15:43.286 11:03:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.286 11:03:11 -- nvmf/common.sh@163 -- # true 00:15:43.286 11:03:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.286 11:03:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.286 11:03:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.286 11:03:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.286 11:03:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.286 11:03:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.286 11:03:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.286 11:03:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.286 11:03:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.286 11:03:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.286 11:03:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:43.286 11:03:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:43.286 11:03:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:43.286 11:03:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.286 11:03:11 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.286 11:03:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.286 11:03:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:43.286 11:03:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:43.286 11:03:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.286 11:03:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.286 11:03:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.286 11:03:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.286 11:03:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.545 11:03:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:43.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:43.545 00:15:43.545 --- 10.0.0.2 ping statistics --- 00:15:43.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.545 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:43.545 11:03:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:43.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:43.545 00:15:43.545 --- 10.0.0.3 ping statistics --- 00:15:43.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.545 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:43.545 11:03:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:43.545 00:15:43.545 --- 10.0.0.1 ping statistics --- 00:15:43.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.545 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:43.545 11:03:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.545 11:03:11 -- nvmf/common.sh@422 -- # return 0 00:15:43.545 11:03:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:43.545 11:03:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.545 11:03:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:43.545 11:03:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:43.545 11:03:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.545 11:03:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:43.545 11:03:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:43.545 11:03:11 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:43.545 11:03:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:43.545 11:03:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:43.545 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:43.545 11:03:11 -- nvmf/common.sh@470 -- # nvmfpid=79119 00:15:43.545 11:03:11 -- nvmf/common.sh@471 -- # waitforlisten 79119 00:15:43.545 11:03:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.545 11:03:11 -- common/autotest_common.sh@817 -- # '[' -z 79119 ']' 00:15:43.545 11:03:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.545 11:03:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:43.545 11:03:11 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.545 11:03:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:43.545 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:43.545 [2024-04-18 11:03:12.032875] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:43.545 [2024-04-18 11:03:12.033231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.545 [2024-04-18 11:03:12.172702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.803 [2024-04-18 11:03:12.275142] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.803 [2024-04-18 11:03:12.275430] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.803 [2024-04-18 11:03:12.275626] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.803 [2024-04-18 11:03:12.275819] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.803 [2024-04-18 11:03:12.275867] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.803 [2024-04-18 11:03:12.276225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.803 [2024-04-18 11:03:12.276309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.803 [2024-04-18 11:03:12.277081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.803 [2024-04-18 11:03:12.277107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.737 11:03:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:44.737 11:03:13 -- common/autotest_common.sh@850 -- # return 0 00:15:44.737 11:03:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:44.737 11:03:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.737 11:03:13 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 [2024-04-18 11:03:13.073379] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@26 -- # seq 1 4 00:15:44.737 11:03:13 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:44.737 11:03:13 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 Null1 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 
11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 [2024-04-18 11:03:13.137481] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:44.737 11:03:13 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 Null2 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:44.737 11:03:13 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 Null3 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:44.737 11:03:13 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 Null4 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.737 11:03:13 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 4420 00:15:44.737 00:15:44.737 Discovery Log Number of Records 6, Generation counter 6 00:15:44.737 =====Discovery Log Entry 0====== 00:15:44.737 trtype: tcp 00:15:44.737 adrfam: ipv4 00:15:44.737 subtype: current discovery subsystem 00:15:44.737 treq: not required 00:15:44.737 portid: 0 00:15:44.737 trsvcid: 4420 00:15:44.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:44.737 traddr: 10.0.0.2 00:15:44.737 eflags: explicit discovery connections, duplicate discovery information 00:15:44.737 sectype: none 00:15:44.737 =====Discovery Log Entry 1====== 00:15:44.737 trtype: tcp 00:15:44.737 adrfam: ipv4 00:15:44.737 subtype: nvme subsystem 00:15:44.737 treq: not required 00:15:44.737 portid: 0 00:15:44.737 trsvcid: 4420 00:15:44.737 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:44.737 traddr: 10.0.0.2 00:15:44.737 eflags: none 00:15:44.737 sectype: none 00:15:44.737 =====Discovery Log Entry 2====== 00:15:44.737 trtype: tcp 00:15:44.737 adrfam: ipv4 
00:15:44.737 subtype: nvme subsystem 00:15:44.737 treq: not required 00:15:44.737 portid: 0 00:15:44.737 trsvcid: 4420 00:15:44.737 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:44.737 traddr: 10.0.0.2 00:15:44.737 eflags: none 00:15:44.737 sectype: none 00:15:44.737 =====Discovery Log Entry 3====== 00:15:44.737 trtype: tcp 00:15:44.737 adrfam: ipv4 00:15:44.737 subtype: nvme subsystem 00:15:44.737 treq: not required 00:15:44.737 portid: 0 00:15:44.737 trsvcid: 4420 00:15:44.737 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:44.737 traddr: 10.0.0.2 00:15:44.737 eflags: none 00:15:44.737 sectype: none 00:15:44.737 =====Discovery Log Entry 4====== 00:15:44.737 trtype: tcp 00:15:44.737 adrfam: ipv4 00:15:44.737 subtype: nvme subsystem 00:15:44.737 treq: not required 00:15:44.737 portid: 0 00:15:44.737 trsvcid: 4420 00:15:44.737 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:44.737 traddr: 10.0.0.2 00:15:44.737 eflags: none 00:15:44.737 sectype: none 00:15:44.737 =====Discovery Log Entry 5====== 00:15:44.737 trtype: tcp 00:15:44.737 adrfam: ipv4 00:15:44.737 subtype: discovery subsystem referral 00:15:44.737 treq: not required 00:15:44.737 portid: 0 00:15:44.737 trsvcid: 4430 00:15:44.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:44.737 traddr: 10.0.0.2 00:15:44.737 eflags: none 00:15:44.737 sectype: none 00:15:44.737 Perform nvmf subsystem discovery via RPC 00:15:44.737 11:03:13 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:44.737 11:03:13 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:44.737 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.737 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.737 [2024-04-18 11:03:13.337482] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:44.737 [ 00:15:44.737 { 00:15:44.737 "allow_any_host": true, 00:15:44.737 "hosts": [], 00:15:44.737 "listen_addresses": [ 00:15:44.737 { 00:15:44.737 "adrfam": "IPv4", 00:15:44.737 "traddr": "10.0.0.2", 00:15:44.737 "transport": "TCP", 00:15:44.737 "trsvcid": "4420", 00:15:44.737 "trtype": "TCP" 00:15:44.737 } 00:15:44.737 ], 00:15:44.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.737 "subtype": "Discovery" 00:15:44.737 }, 00:15:44.737 { 00:15:44.737 "allow_any_host": true, 00:15:44.737 "hosts": [], 00:15:44.737 "listen_addresses": [ 00:15:44.737 { 00:15:44.737 "adrfam": "IPv4", 00:15:44.737 "traddr": "10.0.0.2", 00:15:44.737 "transport": "TCP", 00:15:44.737 "trsvcid": "4420", 00:15:44.737 "trtype": "TCP" 00:15:44.737 } 00:15:44.737 ], 00:15:44.737 "max_cntlid": 65519, 00:15:44.737 "max_namespaces": 32, 00:15:44.737 "min_cntlid": 1, 00:15:44.737 "model_number": "SPDK bdev Controller", 00:15:44.737 "namespaces": [ 00:15:44.738 { 00:15:44.738 "bdev_name": "Null1", 00:15:44.738 "name": "Null1", 00:15:44.738 "nguid": "1B506D9BCEBE449F86992F081B2236C8", 00:15:44.738 "nsid": 1, 00:15:44.738 "uuid": "1b506d9b-cebe-449f-8699-2f081b2236c8" 00:15:44.738 } 00:15:44.738 ], 00:15:44.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.738 "serial_number": "SPDK00000000000001", 00:15:44.738 "subtype": "NVMe" 00:15:44.738 }, 00:15:44.738 { 00:15:44.738 "allow_any_host": true, 00:15:44.738 "hosts": [], 00:15:44.738 "listen_addresses": [ 00:15:44.738 { 00:15:44.738 "adrfam": "IPv4", 00:15:44.738 "traddr": "10.0.0.2", 00:15:44.738 "transport": "TCP", 00:15:44.738 "trsvcid": "4420", 00:15:44.738 "trtype": "TCP" 00:15:44.738 
} 00:15:44.738 ], 00:15:44.738 "max_cntlid": 65519, 00:15:44.738 "max_namespaces": 32, 00:15:44.738 "min_cntlid": 1, 00:15:44.738 "model_number": "SPDK bdev Controller", 00:15:44.738 "namespaces": [ 00:15:44.738 { 00:15:44.738 "bdev_name": "Null2", 00:15:44.738 "name": "Null2", 00:15:44.738 "nguid": "6E4483ADC5E24B75935C97E75FA1EFF3", 00:15:44.738 "nsid": 1, 00:15:44.738 "uuid": "6e4483ad-c5e2-4b75-935c-97e75fa1eff3" 00:15:44.738 } 00:15:44.738 ], 00:15:44.738 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:44.738 "serial_number": "SPDK00000000000002", 00:15:44.738 "subtype": "NVMe" 00:15:44.738 }, 00:15:44.738 { 00:15:44.738 "allow_any_host": true, 00:15:44.738 "hosts": [], 00:15:44.738 "listen_addresses": [ 00:15:44.738 { 00:15:44.738 "adrfam": "IPv4", 00:15:44.738 "traddr": "10.0.0.2", 00:15:44.738 "transport": "TCP", 00:15:44.738 "trsvcid": "4420", 00:15:44.738 "trtype": "TCP" 00:15:44.738 } 00:15:44.738 ], 00:15:44.738 "max_cntlid": 65519, 00:15:44.738 "max_namespaces": 32, 00:15:44.738 "min_cntlid": 1, 00:15:44.738 "model_number": "SPDK bdev Controller", 00:15:44.738 "namespaces": [ 00:15:44.738 { 00:15:44.738 "bdev_name": "Null3", 00:15:44.738 "name": "Null3", 00:15:44.738 "nguid": "AEC0B057CBC94CEDA436A975C7E7E911", 00:15:44.738 "nsid": 1, 00:15:44.738 "uuid": "aec0b057-cbc9-4ced-a436-a975c7e7e911" 00:15:44.738 } 00:15:44.738 ], 00:15:44.738 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:44.738 "serial_number": "SPDK00000000000003", 00:15:44.738 "subtype": "NVMe" 00:15:44.738 }, 00:15:44.738 { 00:15:44.738 "allow_any_host": true, 00:15:44.738 "hosts": [], 00:15:44.738 "listen_addresses": [ 00:15:44.738 { 00:15:44.738 "adrfam": "IPv4", 00:15:44.738 "traddr": "10.0.0.2", 00:15:44.738 "transport": "TCP", 00:15:44.738 "trsvcid": "4420", 00:15:44.738 "trtype": "TCP" 00:15:44.738 } 00:15:44.738 ], 00:15:44.738 "max_cntlid": 65519, 00:15:44.738 "max_namespaces": 32, 00:15:44.738 "min_cntlid": 1, 00:15:44.738 "model_number": "SPDK bdev Controller", 00:15:44.738 "namespaces": [ 00:15:44.738 { 00:15:44.738 "bdev_name": "Null4", 00:15:44.738 "name": "Null4", 00:15:44.738 "nguid": "9BE494FF71F84418819C9BAF08850190", 00:15:44.738 "nsid": 1, 00:15:44.738 "uuid": "9be494ff-71f8-4418-819c-9baf08850190" 00:15:44.738 } 00:15:44.738 ], 00:15:44.738 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:44.738 "serial_number": "SPDK00000000000004", 00:15:44.738 "subtype": "NVMe" 00:15:44.738 } 00:15:44.738 ] 00:15:44.738 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.738 11:03:13 -- target/discovery.sh@42 -- # seq 1 4 00:15:44.738 11:03:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:44.738 11:03:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.738 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.738 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:44.996 11:03:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:44.996 11:03:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:44.996 11:03:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:44.996 11:03:13 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:44.996 11:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.996 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.996 11:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.996 11:03:13 -- target/discovery.sh@49 -- # check_bdevs= 00:15:44.996 11:03:13 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:44.996 11:03:13 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:44.996 11:03:13 -- target/discovery.sh@57 -- # nvmftestfini 00:15:44.996 11:03:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:44.996 11:03:13 -- nvmf/common.sh@117 -- # sync 00:15:44.996 11:03:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.996 11:03:13 -- nvmf/common.sh@120 -- # set +e 00:15:44.996 11:03:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.996 11:03:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.996 rmmod nvme_tcp 00:15:44.996 rmmod nvme_fabrics 00:15:44.996 rmmod nvme_keyring 00:15:44.996 11:03:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.996 11:03:13 -- nvmf/common.sh@124 -- # set -e 00:15:44.996 11:03:13 -- nvmf/common.sh@125 -- # return 0 00:15:44.996 11:03:13 -- nvmf/common.sh@478 -- # '[' -n 79119 ']' 00:15:44.996 11:03:13 -- nvmf/common.sh@479 -- # 
killprocess 79119 00:15:44.996 11:03:13 -- common/autotest_common.sh@936 -- # '[' -z 79119 ']' 00:15:44.996 11:03:13 -- common/autotest_common.sh@940 -- # kill -0 79119 00:15:44.996 11:03:13 -- common/autotest_common.sh@941 -- # uname 00:15:44.996 11:03:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.996 11:03:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79119 00:15:44.996 11:03:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.996 11:03:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.996 killing process with pid 79119 00:15:44.996 11:03:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79119' 00:15:44.996 11:03:13 -- common/autotest_common.sh@955 -- # kill 79119 00:15:44.996 [2024-04-18 11:03:13.598029] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:44.996 11:03:13 -- common/autotest_common.sh@960 -- # wait 79119 00:15:45.266 11:03:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:45.266 11:03:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:45.266 11:03:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:45.266 11:03:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.266 11:03:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:45.266 11:03:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.266 11:03:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.266 11:03:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.266 11:03:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:45.266 00:15:45.266 real 0m2.359s 00:15:45.266 user 0m6.434s 00:15:45.266 sys 0m0.612s 00:15:45.266 11:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:45.266 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:45.266 ************************************ 00:15:45.266 END TEST nvmf_discovery 00:15:45.266 ************************************ 00:15:45.266 11:03:13 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:45.266 11:03:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:45.266 11:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:45.266 11:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:45.532 ************************************ 00:15:45.532 START TEST nvmf_referrals 00:15:45.532 ************************************ 00:15:45.532 11:03:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:45.532 * Looking for test storage... 
00:15:45.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.532 11:03:14 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.532 11:03:14 -- nvmf/common.sh@7 -- # uname -s 00:15:45.532 11:03:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.532 11:03:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.532 11:03:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.532 11:03:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.532 11:03:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.532 11:03:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.532 11:03:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.532 11:03:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.532 11:03:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.532 11:03:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.532 11:03:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:45.532 11:03:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:45.532 11:03:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.532 11:03:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.532 11:03:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.532 11:03:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.532 11:03:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.532 11:03:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.532 11:03:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.532 11:03:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.532 11:03:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 11:03:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 11:03:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 11:03:14 -- paths/export.sh@5 -- # export PATH 00:15:45.532 11:03:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 11:03:14 -- nvmf/common.sh@47 -- # : 0 00:15:45.532 11:03:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.532 11:03:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.532 11:03:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.532 11:03:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.532 11:03:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.532 11:03:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.532 11:03:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.532 11:03:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.532 11:03:14 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:45.532 11:03:14 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:15:45.532 11:03:14 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:45.532 11:03:14 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:45.532 11:03:14 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:45.532 11:03:14 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:45.532 11:03:14 -- target/referrals.sh@37 -- # nvmftestinit 00:15:45.532 11:03:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:45.532 11:03:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.532 11:03:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:45.532 11:03:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:45.532 11:03:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:45.532 11:03:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.532 11:03:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.532 11:03:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.532 11:03:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:45.532 11:03:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:45.532 11:03:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:45.532 11:03:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:45.532 11:03:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:45.532 11:03:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:45.532 11:03:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.532 11:03:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:15:45.532 11:03:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:45.532 11:03:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:45.532 11:03:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.532 11:03:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.532 11:03:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.532 11:03:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.532 11:03:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.532 11:03:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.532 11:03:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.532 11:03:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.532 11:03:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:45.532 11:03:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:45.532 Cannot find device "nvmf_tgt_br" 00:15:45.532 11:03:14 -- nvmf/common.sh@155 -- # true 00:15:45.532 11:03:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.532 Cannot find device "nvmf_tgt_br2" 00:15:45.532 11:03:14 -- nvmf/common.sh@156 -- # true 00:15:45.532 11:03:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:45.532 11:03:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:45.532 Cannot find device "nvmf_tgt_br" 00:15:45.532 11:03:14 -- nvmf/common.sh@158 -- # true 00:15:45.532 11:03:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:45.532 Cannot find device "nvmf_tgt_br2" 00:15:45.532 11:03:14 -- nvmf/common.sh@159 -- # true 00:15:45.532 11:03:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:45.533 11:03:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:45.791 11:03:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.791 11:03:14 -- nvmf/common.sh@162 -- # true 00:15:45.791 11:03:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.791 11:03:14 -- nvmf/common.sh@163 -- # true 00:15:45.791 11:03:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.791 11:03:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.791 11:03:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.791 11:03:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.791 11:03:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.791 11:03:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.791 11:03:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.791 11:03:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:45.791 11:03:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:45.791 11:03:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:45.791 11:03:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:45.791 11:03:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:15:45.791 11:03:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:45.791 11:03:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.791 11:03:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.791 11:03:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:45.791 11:03:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:45.791 11:03:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:45.791 11:03:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:45.791 11:03:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:45.791 11:03:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:45.791 11:03:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:45.791 11:03:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:45.791 11:03:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:45.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:45.791 00:15:45.791 --- 10.0.0.2 ping statistics --- 00:15:45.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.791 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:45.791 11:03:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:45.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:45.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:45.791 00:15:45.791 --- 10.0.0.3 ping statistics --- 00:15:45.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.791 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:45.791 11:03:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:45.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:45.791 00:15:45.791 --- 10.0.0.1 ping statistics --- 00:15:45.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.791 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:45.791 11:03:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.791 11:03:14 -- nvmf/common.sh@422 -- # return 0 00:15:45.791 11:03:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:45.791 11:03:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.791 11:03:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:45.791 11:03:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:45.791 11:03:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.791 11:03:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:45.791 11:03:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:45.791 11:03:14 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:45.791 11:03:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:45.791 11:03:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:45.791 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:15:45.791 11:03:14 -- nvmf/common.sh@470 -- # nvmfpid=79350 00:15:45.791 11:03:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:45.791 11:03:14 -- nvmf/common.sh@471 -- # waitforlisten 79350 00:15:45.791 11:03:14 -- common/autotest_common.sh@817 -- # '[' -z 79350 ']' 00:15:45.792 11:03:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.792 11:03:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:45.792 11:03:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.792 11:03:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:45.792 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:15:46.051 [2024-04-18 11:03:14.463583] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:46.051 [2024-04-18 11:03:14.464494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.051 [2024-04-18 11:03:14.608623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.309 [2024-04-18 11:03:14.709010] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.309 [2024-04-18 11:03:14.709099] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.309 [2024-04-18 11:03:14.709114] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.309 [2024-04-18 11:03:14.709124] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.309 [2024-04-18 11:03:14.709133] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
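
The nvmf_veth_init sequence traced above builds the self-contained NVMe/TCP test topology the referrals test runs against: a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, veth pairs linking it to the host, a bridge (nvmf_br) joining the host-side peers, an iptables rule admitting port 4420, and three pings proving connectivity before the target is launched inside the namespace with ip netns exec. A condensed sketch of the equivalent commands, with names and addresses taken from the trace and the stale-device cleanup and error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
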
00:15:46.309 [2024-04-18 11:03:14.709317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.309 [2024-04-18 11:03:14.709418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.309 [2024-04-18 11:03:14.710151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.309 [2024-04-18 11:03:14.710164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.876 11:03:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:46.876 11:03:15 -- common/autotest_common.sh@850 -- # return 0 00:15:46.876 11:03:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:46.876 11:03:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:46.876 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:46.876 11:03:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.876 11:03:15 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.876 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.876 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:46.876 [2024-04-18 11:03:15.512306] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.136 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:47.136 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.136 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 [2024-04-18 11:03:15.532899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:47.136 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:47.136 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.136 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:47.136 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.136 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:47.136 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.136 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:47.136 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.136 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 11:03:15 -- target/referrals.sh@48 -- # jq length 00:15:47.136 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:47.136 11:03:15 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:47.136 11:03:15 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:47.136 11:03:15 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:47.136 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 
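
The referral steps that follow drive the target entirely over its JSON-RPC socket: a discovery listener is opened on 10.0.0.2:8009, three referrals pointing at 127.0.0.2-127.0.0.4 port 4430 are registered, and the result is checked both with nvmf_discovery_get_referrals and against the host's view of the discovery log page (nvme discover -o json filtered with jq). A standalone sketch of the same flow, assuming the target from the trace is already running and using scripts/rpc.py directly instead of the harness' rpc_cmd wrapper (the --hostnqn/--hostid options are dropped for brevity):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length     # expect 3
    # host-side cross-check: referrals appear as extra discovery log entries
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # teardown mirrors the additions
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    # referrals can also carry an explicit subsystem NQN (-n), as exercised later in the trace
    $RPC nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
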
00:15:47.136 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 11:03:15 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:47.136 11:03:15 -- target/referrals.sh@21 -- # sort 00:15:47.136 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:47.136 11:03:15 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:47.136 11:03:15 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:47.136 11:03:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:47.136 11:03:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:47.136 11:03:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.136 11:03:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:47.136 11:03:15 -- target/referrals.sh@26 -- # sort 00:15:47.395 11:03:15 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:47.395 11:03:15 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:47.395 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.395 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.395 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:47.395 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.395 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.395 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:47.395 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.395 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.395 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:47.395 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.395 11:03:15 -- target/referrals.sh@56 -- # jq length 00:15:47.395 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.395 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:47.395 11:03:15 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:47.395 11:03:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:47.395 11:03:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:47.395 11:03:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.395 11:03:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:47.395 11:03:15 -- target/referrals.sh@26 -- # sort 00:15:47.395 11:03:15 -- target/referrals.sh@26 -- # echo 00:15:47.395 11:03:15 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:47.395 11:03:15 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:47.395 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.395 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.395 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:47.395 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.395 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.395 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:47.395 11:03:15 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:47.395 11:03:15 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:47.395 11:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.395 11:03:15 -- common/autotest_common.sh@10 -- # set +x 00:15:47.395 11:03:15 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:47.395 11:03:15 -- target/referrals.sh@21 -- # sort 00:15:47.395 11:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:47.395 11:03:15 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:47.395 11:03:15 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:47.395 11:03:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:47.395 11:03:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:47.395 11:03:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.395 11:03:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:47.395 11:03:16 -- target/referrals.sh@26 -- # sort 00:15:47.655 11:03:16 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:47.655 11:03:16 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:47.655 11:03:16 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:47.655 11:03:16 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:47.655 11:03:16 -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:47.655 11:03:16 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.655 11:03:16 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:47.655 11:03:16 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:47.655 11:03:16 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:47.655 11:03:16 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:47.655 11:03:16 -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:47.655 11:03:16 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:47.655 11:03:16 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.655 11:03:16 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:47.655 11:03:16 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:47.655 11:03:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.655 11:03:16 -- common/autotest_common.sh@10 -- # set +x 00:15:47.655 11:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.655 11:03:16 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:47.655 11:03:16 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:47.655 11:03:16 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:47.655 11:03:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.655 11:03:16 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:47.655 11:03:16 -- common/autotest_common.sh@10 -- # set +x 00:15:47.655 11:03:16 -- target/referrals.sh@21 -- # sort 00:15:47.655 11:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.655 11:03:16 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:47.655 11:03:16 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:47.655 11:03:16 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:47.655 11:03:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:47.655 11:03:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:47.655 11:03:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.655 11:03:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:47.655 11:03:16 -- target/referrals.sh@26 -- # sort 00:15:47.915 11:03:16 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:47.915 11:03:16 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:47.915 11:03:16 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:47.915 11:03:16 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:47.915 11:03:16 -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:47.915 11:03:16 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.915 11:03:16 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:47.915 11:03:16 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:47.915 11:03:16 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:47.915 11:03:16 -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:47.915 11:03:16 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:47.915 11:03:16 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:47.915 11:03:16 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:47.915 11:03:16 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:47.915 11:03:16 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:47.915 11:03:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.915 11:03:16 -- common/autotest_common.sh@10 -- # set +x 00:15:47.915 11:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.915 11:03:16 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:47.915 11:03:16 -- target/referrals.sh@82 -- # jq length 00:15:47.915 11:03:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.915 11:03:16 -- common/autotest_common.sh@10 -- # set +x 00:15:47.915 11:03:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.174 11:03:16 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:48.174 11:03:16 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:48.174 11:03:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:48.174 11:03:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:48.174 11:03:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:48.174 11:03:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:48.174 11:03:16 -- target/referrals.sh@26 -- # sort 00:15:48.174 11:03:16 -- target/referrals.sh@26 -- # echo 00:15:48.174 11:03:16 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:48.174 11:03:16 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:48.174 11:03:16 -- target/referrals.sh@86 -- # nvmftestfini 00:15:48.174 11:03:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:48.174 11:03:16 -- nvmf/common.sh@117 -- # sync 00:15:48.174 11:03:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.174 11:03:16 -- nvmf/common.sh@120 -- # set +e 00:15:48.174 11:03:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.174 11:03:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.174 rmmod nvme_tcp 00:15:48.174 rmmod nvme_fabrics 00:15:48.174 rmmod nvme_keyring 00:15:48.174 11:03:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.174 11:03:16 -- nvmf/common.sh@124 -- # set -e 00:15:48.174 11:03:16 -- nvmf/common.sh@125 -- # return 0 00:15:48.174 11:03:16 -- nvmf/common.sh@478 -- # '[' -n 79350 ']' 00:15:48.174 11:03:16 -- nvmf/common.sh@479 -- # killprocess 79350 00:15:48.174 11:03:16 -- common/autotest_common.sh@936 -- # '[' -z 79350 ']' 00:15:48.174 11:03:16 -- common/autotest_common.sh@940 -- # kill -0 79350 00:15:48.174 11:03:16 -- common/autotest_common.sh@941 -- # uname 00:15:48.174 11:03:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.174 11:03:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79350 00:15:48.174 11:03:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.174 11:03:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.174 killing process with pid 79350 00:15:48.174 11:03:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79350' 00:15:48.174 11:03:16 -- common/autotest_common.sh@955 -- # kill 79350 00:15:48.174 11:03:16 -- common/autotest_common.sh@960 -- # wait 79350 00:15:48.432 11:03:16 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:15:48.432 11:03:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:48.432 11:03:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:48.432 11:03:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.432 11:03:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.432 11:03:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.432 11:03:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.432 11:03:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.432 11:03:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:48.432 00:15:48.432 real 0m3.052s 00:15:48.432 user 0m9.906s 00:15:48.432 sys 0m0.846s 00:15:48.432 11:03:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.432 11:03:17 -- common/autotest_common.sh@10 -- # set +x 00:15:48.432 ************************************ 00:15:48.432 END TEST nvmf_referrals 00:15:48.432 ************************************ 00:15:48.432 11:03:17 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:48.432 11:03:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.432 11:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.432 11:03:17 -- common/autotest_common.sh@10 -- # set +x 00:15:48.690 ************************************ 00:15:48.690 START TEST nvmf_connect_disconnect 00:15:48.690 ************************************ 00:15:48.690 11:03:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:48.690 * Looking for test storage... 00:15:48.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.690 11:03:17 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.690 11:03:17 -- nvmf/common.sh@7 -- # uname -s 00:15:48.690 11:03:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.690 11:03:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.690 11:03:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.690 11:03:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.690 11:03:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.690 11:03:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.690 11:03:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.690 11:03:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.690 11:03:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.691 11:03:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.691 11:03:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:48.691 11:03:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:15:48.691 11:03:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.691 11:03:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.691 11:03:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.691 11:03:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.691 11:03:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.691 11:03:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.691 11:03:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
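
nvmftestfini, traced just above at the end of the referrals test, reverses the setup: it unloads the host-side NVMe/TCP modules, kills the target, drops the target namespace, and flushes the initiator address. Roughly, and only as an approximation of what the nvmfcleanup/remove_spdk_ns helpers do (the exact implementation lives in test/nvmf/common.sh and autotest_common.sh):

    modprobe -r nvme-tcp       # also pulls out nvme_fabrics/nvme_keyring, as in the rmmod lines above
    modprobe -r nvme-fabrics
    kill "$nvmfpid"                      # the nvmf_tgt started earlier (pid 79350 in this run)
    ip netns delete nvmf_tgt_ns_spdk     # assumption: this is what remove_spdk_ns amounts to here
    ip -4 addr flush nvmf_init_if
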
00:15:48.691 11:03:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.691 11:03:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.691 11:03:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.691 11:03:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.691 11:03:17 -- paths/export.sh@5 -- # export PATH 00:15:48.691 11:03:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.691 11:03:17 -- nvmf/common.sh@47 -- # : 0 00:15:48.691 11:03:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.691 11:03:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.691 11:03:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.691 11:03:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.691 11:03:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.691 11:03:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.691 11:03:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.691 11:03:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.691 11:03:17 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.691 11:03:17 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.691 11:03:17 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:48.691 11:03:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:48.691 11:03:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.691 11:03:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:48.691 11:03:17 -- 
nvmf/common.sh@399 -- # local -g is_hw=no 00:15:48.691 11:03:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:48.691 11:03:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.691 11:03:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.691 11:03:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.691 11:03:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:48.691 11:03:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:48.691 11:03:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:48.691 11:03:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:48.691 11:03:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:48.691 11:03:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:48.691 11:03:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.691 11:03:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.691 11:03:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.691 11:03:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:48.691 11:03:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.691 11:03:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.691 11:03:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.691 11:03:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.691 11:03:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.691 11:03:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.691 11:03:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.691 11:03:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.691 11:03:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:48.691 11:03:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:48.691 Cannot find device "nvmf_tgt_br" 00:15:48.691 11:03:17 -- nvmf/common.sh@155 -- # true 00:15:48.691 11:03:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.691 Cannot find device "nvmf_tgt_br2" 00:15:48.691 11:03:17 -- nvmf/common.sh@156 -- # true 00:15:48.691 11:03:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:48.691 11:03:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:48.691 Cannot find device "nvmf_tgt_br" 00:15:48.691 11:03:17 -- nvmf/common.sh@158 -- # true 00:15:48.691 11:03:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:48.691 Cannot find device "nvmf_tgt_br2" 00:15:48.691 11:03:17 -- nvmf/common.sh@159 -- # true 00:15:48.691 11:03:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:48.956 11:03:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:48.956 11:03:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.956 11:03:17 -- nvmf/common.sh@162 -- # true 00:15:48.956 11:03:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.956 11:03:17 -- nvmf/common.sh@163 -- # true 00:15:48.956 11:03:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.956 11:03:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.956 11:03:17 -- nvmf/common.sh@170 -- 
# ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.956 11:03:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.956 11:03:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.956 11:03:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.956 11:03:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.956 11:03:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.956 11:03:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.956 11:03:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:48.956 11:03:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:48.956 11:03:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:48.956 11:03:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:48.956 11:03:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.956 11:03:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.956 11:03:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.956 11:03:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:48.956 11:03:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:48.956 11:03:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.956 11:03:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.235 11:03:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.235 11:03:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.235 11:03:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.235 11:03:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:49.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:49.235 00:15:49.235 --- 10.0.0.2 ping statistics --- 00:15:49.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.235 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:49.235 11:03:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:49.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:49.235 00:15:49.235 --- 10.0.0.3 ping statistics --- 00:15:49.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.235 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:49.235 11:03:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:49.235 00:15:49.235 --- 10.0.0.1 ping statistics --- 00:15:49.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.235 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:49.235 11:03:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.235 11:03:17 -- nvmf/common.sh@422 -- # return 0 00:15:49.235 11:03:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:49.235 11:03:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.235 11:03:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:49.235 11:03:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:49.235 11:03:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.235 11:03:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:49.235 11:03:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:49.235 11:03:17 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:49.235 11:03:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:49.235 11:03:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:49.235 11:03:17 -- common/autotest_common.sh@10 -- # set +x 00:15:49.235 11:03:17 -- nvmf/common.sh@470 -- # nvmfpid=79662 00:15:49.235 11:03:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:49.235 11:03:17 -- nvmf/common.sh@471 -- # waitforlisten 79662 00:15:49.235 11:03:17 -- common/autotest_common.sh@817 -- # '[' -z 79662 ']' 00:15:49.235 11:03:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.235 11:03:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.235 11:03:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.235 11:03:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.235 11:03:17 -- common/autotest_common.sh@10 -- # set +x 00:15:49.235 [2024-04-18 11:03:17.723765] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:49.235 [2024-04-18 11:03:17.723878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.235 [2024-04-18 11:03:17.869066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.494 [2024-04-18 11:03:17.971140] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.494 [2024-04-18 11:03:17.971257] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.494 [2024-04-18 11:03:17.971279] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.494 [2024-04-18 11:03:17.971289] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.494 [2024-04-18 11:03:17.971299] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
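
As in the previous test, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the application's JSON-RPC socket answers before any rpc_cmd is issued. A rough approximation of that step, assuming the default /var/tmp/spdk.sock socket (the real helpers add retries, xtrace handling, and timeouts):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the target serves RPCs; rpc_get_methods is a cheap read-only call
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done
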
00:15:49.494 [2024-04-18 11:03:17.971421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.494 [2024-04-18 11:03:17.971591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.494 [2024-04-18 11:03:17.972223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.494 [2024-04-18 11:03:17.972234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.060 11:03:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.060 11:03:18 -- common/autotest_common.sh@850 -- # return 0 00:15:50.060 11:03:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:50.060 11:03:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:50.060 11:03:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.321 11:03:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.321 11:03:18 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:50.321 11:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.321 11:03:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.321 [2024-04-18 11:03:18.720872] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.321 11:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.321 11:03:18 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:50.321 11:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.322 11:03:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.322 11:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:50.322 11:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.322 11:03:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.322 11:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.322 11:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.322 11:03:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.322 11:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.322 11:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.322 11:03:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.322 [2024-04-18 11:03:18.786491] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.322 11:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:15:50.322 11:03:18 -- target/connect_disconnect.sh@34 -- # set +x 00:15:52.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:16:01.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.157 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:17:51.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:37.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:44.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:00.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:18.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:23.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.899 11:07:02 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
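
The run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the body of the connect_disconnect test: the target is populated with one Malloc-backed subsystem, and the host then connects and disconnects num_iterations=100 times using "nvme connect -i 8". A condensed sketch of the sequence, with any intermediate checks the harness performs between connect and disconnect, and the --hostnqn/--hostid options, omitted:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512                 # 64 MB malloc bdev (Malloc0), 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the "disconnected 1 controller(s)" lines seen above
    done
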
00:19:33.899 11:07:02 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:33.899 11:07:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:33.899 11:07:02 -- nvmf/common.sh@117 -- # sync 00:19:33.899 11:07:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.899 11:07:02 -- nvmf/common.sh@120 -- # set +e 00:19:33.899 11:07:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.899 11:07:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.899 rmmod nvme_tcp 00:19:33.899 rmmod nvme_fabrics 00:19:33.899 rmmod nvme_keyring 00:19:33.899 11:07:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.899 11:07:02 -- nvmf/common.sh@124 -- # set -e 00:19:33.899 11:07:02 -- nvmf/common.sh@125 -- # return 0 00:19:33.899 11:07:02 -- nvmf/common.sh@478 -- # '[' -n 79662 ']' 00:19:33.899 11:07:02 -- nvmf/common.sh@479 -- # killprocess 79662 00:19:33.899 11:07:02 -- common/autotest_common.sh@936 -- # '[' -z 79662 ']' 00:19:33.899 11:07:02 -- common/autotest_common.sh@940 -- # kill -0 79662 00:19:33.899 11:07:02 -- common/autotest_common.sh@941 -- # uname 00:19:33.899 11:07:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:33.899 11:07:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79662 00:19:33.899 11:07:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:33.899 11:07:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:33.899 killing process with pid 79662 00:19:33.899 11:07:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79662' 00:19:33.899 11:07:02 -- common/autotest_common.sh@955 -- # kill 79662 00:19:33.899 11:07:02 -- common/autotest_common.sh@960 -- # wait 79662 00:19:34.158 11:07:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:34.158 11:07:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:34.158 11:07:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:34.158 11:07:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.158 11:07:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:34.158 11:07:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.158 11:07:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.158 11:07:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.417 11:07:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:34.417 00:19:34.417 real 3m45.708s 00:19:34.417 user 14m37.788s 00:19:34.417 sys 0m22.702s 00:19:34.417 11:07:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:34.417 11:07:02 -- common/autotest_common.sh@10 -- # set +x 00:19:34.417 ************************************ 00:19:34.417 END TEST nvmf_connect_disconnect 00:19:34.417 ************************************ 00:19:34.417 11:07:02 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:34.417 11:07:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:34.417 11:07:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:34.417 11:07:02 -- common/autotest_common.sh@10 -- # set +x 00:19:34.417 ************************************ 00:19:34.417 START TEST nvmf_multitarget 00:19:34.417 ************************************ 00:19:34.417 11:07:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:34.417 * Looking for test storage... 
00:19:34.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:34.417 11:07:03 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:34.417 11:07:03 -- nvmf/common.sh@7 -- # uname -s 00:19:34.417 11:07:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.417 11:07:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.417 11:07:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.417 11:07:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.417 11:07:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.417 11:07:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.417 11:07:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.417 11:07:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.417 11:07:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.417 11:07:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.675 11:07:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:34.675 11:07:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:34.675 11:07:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.675 11:07:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.675 11:07:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:34.675 11:07:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.675 11:07:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:34.675 11:07:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.675 11:07:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.675 11:07:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.675 11:07:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.675 11:07:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.675 11:07:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.675 11:07:03 -- paths/export.sh@5 -- # export PATH 00:19:34.675 11:07:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.675 11:07:03 -- nvmf/common.sh@47 -- # : 0 00:19:34.675 11:07:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.675 11:07:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.675 11:07:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.675 11:07:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.675 11:07:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.675 11:07:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.675 11:07:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.675 11:07:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.675 11:07:03 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:19:34.675 11:07:03 -- target/multitarget.sh@15 -- # nvmftestinit 00:19:34.675 11:07:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:34.675 11:07:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.675 11:07:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:34.675 11:07:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:34.675 11:07:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:34.675 11:07:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.675 11:07:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.675 11:07:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.675 11:07:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:34.676 11:07:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:34.676 11:07:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:34.676 11:07:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:34.676 11:07:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:34.676 11:07:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:34.676 11:07:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.676 11:07:03 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.676 11:07:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:34.676 11:07:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:34.676 11:07:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:34.676 11:07:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:34.676 11:07:03 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:34.676 11:07:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.676 11:07:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:34.676 11:07:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:34.676 11:07:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:34.676 11:07:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:34.676 11:07:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:34.676 11:07:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:34.676 Cannot find device "nvmf_tgt_br" 00:19:34.676 11:07:03 -- nvmf/common.sh@155 -- # true 00:19:34.676 11:07:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.676 Cannot find device "nvmf_tgt_br2" 00:19:34.676 11:07:03 -- nvmf/common.sh@156 -- # true 00:19:34.676 11:07:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:34.676 11:07:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:34.676 Cannot find device "nvmf_tgt_br" 00:19:34.676 11:07:03 -- nvmf/common.sh@158 -- # true 00:19:34.676 11:07:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:34.676 Cannot find device "nvmf_tgt_br2" 00:19:34.676 11:07:03 -- nvmf/common.sh@159 -- # true 00:19:34.676 11:07:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:34.676 11:07:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:34.676 11:07:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:34.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.676 11:07:03 -- nvmf/common.sh@162 -- # true 00:19:34.676 11:07:03 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:34.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.676 11:07:03 -- nvmf/common.sh@163 -- # true 00:19:34.676 11:07:03 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:34.676 11:07:03 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:34.676 11:07:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:34.676 11:07:03 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:34.676 11:07:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:34.676 11:07:03 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:34.676 11:07:03 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:34.676 11:07:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:34.676 11:07:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:34.676 11:07:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:34.676 11:07:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:34.676 11:07:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:34.676 11:07:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:34.935 11:07:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:34.935 11:07:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:34.935 11:07:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:19:34.935 11:07:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:34.935 11:07:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:34.936 11:07:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:34.936 11:07:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:34.936 11:07:03 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:34.936 11:07:03 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:34.936 11:07:03 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:34.936 11:07:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:34.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:19:34.936 00:19:34.936 --- 10.0.0.2 ping statistics --- 00:19:34.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.936 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:34.936 11:07:03 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:34.936 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:34.936 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:34.936 00:19:34.936 --- 10.0.0.3 ping statistics --- 00:19:34.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.936 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:34.936 11:07:03 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:34.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:19:34.936 00:19:34.936 --- 10.0.0.1 ping statistics --- 00:19:34.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.936 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:34.936 11:07:03 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.936 11:07:03 -- nvmf/common.sh@422 -- # return 0 00:19:34.936 11:07:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:34.936 11:07:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.936 11:07:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:34.936 11:07:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:34.936 11:07:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.936 11:07:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:34.936 11:07:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:34.936 11:07:03 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:34.936 11:07:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:34.936 11:07:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:34.936 11:07:03 -- common/autotest_common.sh@10 -- # set +x 00:19:34.936 11:07:03 -- nvmf/common.sh@470 -- # nvmfpid=83435 00:19:34.936 11:07:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:34.936 11:07:03 -- nvmf/common.sh@471 -- # waitforlisten 83435 00:19:34.936 11:07:03 -- common/autotest_common.sh@817 -- # '[' -z 83435 ']' 00:19:34.936 11:07:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.936 11:07:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:34.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:34.936 11:07:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.936 11:07:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:34.936 11:07:03 -- common/autotest_common.sh@10 -- # set +x 00:19:34.936 [2024-04-18 11:07:03.486405] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:34.936 [2024-04-18 11:07:03.486564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.194 [2024-04-18 11:07:03.625977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.194 [2024-04-18 11:07:03.731953] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.194 [2024-04-18 11:07:03.732337] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.194 [2024-04-18 11:07:03.732477] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.194 [2024-04-18 11:07:03.732625] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.194 [2024-04-18 11:07:03.732746] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.194 [2024-04-18 11:07:03.733011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.194 [2024-04-18 11:07:03.733071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.194 [2024-04-18 11:07:03.733160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.194 [2024-04-18 11:07:03.733161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.130 11:07:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:36.131 11:07:04 -- common/autotest_common.sh@850 -- # return 0 00:19:36.131 11:07:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:36.131 11:07:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:36.131 11:07:04 -- common/autotest_common.sh@10 -- # set +x 00:19:36.131 11:07:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.131 11:07:04 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:36.131 11:07:04 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:36.131 11:07:04 -- target/multitarget.sh@21 -- # jq length 00:19:36.131 11:07:04 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:36.131 11:07:04 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:19:36.389 "nvmf_tgt_1" 00:19:36.389 11:07:04 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:36.389 "nvmf_tgt_2" 00:19:36.389 11:07:04 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:36.389 11:07:04 -- target/multitarget.sh@28 -- # jq length 00:19:36.647 11:07:05 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:36.647 11:07:05 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_1 00:19:36.647 true 00:19:36.647 11:07:05 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:36.906 true 00:19:36.906 11:07:05 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:36.906 11:07:05 -- target/multitarget.sh@35 -- # jq length 00:19:36.906 11:07:05 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:36.906 11:07:05 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:36.906 11:07:05 -- target/multitarget.sh@41 -- # nvmftestfini 00:19:36.906 11:07:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:36.906 11:07:05 -- nvmf/common.sh@117 -- # sync 00:19:36.906 11:07:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.906 11:07:05 -- nvmf/common.sh@120 -- # set +e 00:19:36.906 11:07:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.906 11:07:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.906 rmmod nvme_tcp 00:19:36.906 rmmod nvme_fabrics 00:19:36.906 rmmod nvme_keyring 00:19:37.165 11:07:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:37.165 11:07:05 -- nvmf/common.sh@124 -- # set -e 00:19:37.165 11:07:05 -- nvmf/common.sh@125 -- # return 0 00:19:37.165 11:07:05 -- nvmf/common.sh@478 -- # '[' -n 83435 ']' 00:19:37.165 11:07:05 -- nvmf/common.sh@479 -- # killprocess 83435 00:19:37.165 11:07:05 -- common/autotest_common.sh@936 -- # '[' -z 83435 ']' 00:19:37.165 11:07:05 -- common/autotest_common.sh@940 -- # kill -0 83435 00:19:37.165 11:07:05 -- common/autotest_common.sh@941 -- # uname 00:19:37.165 11:07:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:37.165 11:07:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83435 00:19:37.165 killing process with pid 83435 00:19:37.165 11:07:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:37.165 11:07:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:37.165 11:07:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83435' 00:19:37.165 11:07:05 -- common/autotest_common.sh@955 -- # kill 83435 00:19:37.165 11:07:05 -- common/autotest_common.sh@960 -- # wait 83435 00:19:37.423 11:07:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:37.423 11:07:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:37.423 11:07:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:37.423 11:07:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.423 11:07:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.423 11:07:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.423 11:07:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.423 11:07:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.423 11:07:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:37.423 ************************************ 00:19:37.423 END TEST nvmf_multitarget 00:19:37.423 ************************************ 00:19:37.423 00:19:37.423 real 0m2.904s 00:19:37.423 user 0m9.434s 00:19:37.423 sys 0m0.758s 00:19:37.423 11:07:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:37.423 11:07:05 -- common/autotest_common.sh@10 -- # set +x 00:19:37.423 11:07:05 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:19:37.423 11:07:05 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:19:37.423 11:07:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.423 11:07:05 -- common/autotest_common.sh@10 -- # set +x 00:19:37.423 ************************************ 00:19:37.423 START TEST nvmf_rpc 00:19:37.423 ************************************ 00:19:37.423 11:07:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:19:37.682 * Looking for test storage... 00:19:37.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:37.682 11:07:06 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.682 11:07:06 -- nvmf/common.sh@7 -- # uname -s 00:19:37.682 11:07:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.682 11:07:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.682 11:07:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.682 11:07:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.682 11:07:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.682 11:07:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.682 11:07:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.682 11:07:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.682 11:07:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.682 11:07:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.682 11:07:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:37.682 11:07:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:37.682 11:07:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.682 11:07:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.682 11:07:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.682 11:07:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.682 11:07:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.682 11:07:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.682 11:07:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.682 11:07:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.682 11:07:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.682 11:07:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.682 11:07:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.682 11:07:06 -- paths/export.sh@5 -- # export PATH 00:19:37.682 11:07:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.682 11:07:06 -- nvmf/common.sh@47 -- # : 0 00:19:37.682 11:07:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.682 11:07:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.682 11:07:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.683 11:07:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.683 11:07:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.683 11:07:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.683 11:07:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.683 11:07:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.683 11:07:06 -- target/rpc.sh@11 -- # loops=5 00:19:37.683 11:07:06 -- target/rpc.sh@23 -- # nvmftestinit 00:19:37.683 11:07:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:37.683 11:07:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.683 11:07:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:37.683 11:07:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:37.683 11:07:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:37.683 11:07:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.683 11:07:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.683 11:07:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.683 11:07:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:37.683 11:07:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:37.683 11:07:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:37.683 11:07:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:37.683 11:07:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:37.683 11:07:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:37.683 11:07:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.683 11:07:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.683 11:07:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:37.683 11:07:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:37.683 11:07:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.683 11:07:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.683 11:07:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.683 11:07:06 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.683 11:07:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.683 11:07:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.683 11:07:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.683 11:07:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.683 11:07:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:37.683 11:07:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:37.683 Cannot find device "nvmf_tgt_br" 00:19:37.683 11:07:06 -- nvmf/common.sh@155 -- # true 00:19:37.683 11:07:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.683 Cannot find device "nvmf_tgt_br2" 00:19:37.683 11:07:06 -- nvmf/common.sh@156 -- # true 00:19:37.683 11:07:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:37.683 11:07:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:37.683 Cannot find device "nvmf_tgt_br" 00:19:37.683 11:07:06 -- nvmf/common.sh@158 -- # true 00:19:37.683 11:07:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:37.683 Cannot find device "nvmf_tgt_br2" 00:19:37.683 11:07:06 -- nvmf/common.sh@159 -- # true 00:19:37.683 11:07:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:37.683 11:07:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:37.683 11:07:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.683 11:07:06 -- nvmf/common.sh@162 -- # true 00:19:37.683 11:07:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.683 11:07:06 -- nvmf/common.sh@163 -- # true 00:19:37.683 11:07:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.683 11:07:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.683 11:07:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.683 11:07:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.683 11:07:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.683 11:07:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.683 11:07:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.683 11:07:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:37.683 11:07:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:37.941 11:07:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:37.941 11:07:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:37.941 11:07:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:37.941 11:07:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:37.941 11:07:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.941 11:07:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.941 11:07:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.941 11:07:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:19:37.941 11:07:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:37.941 11:07:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.941 11:07:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.941 11:07:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.941 11:07:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.941 11:07:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.941 11:07:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:37.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:19:37.941 00:19:37.941 --- 10.0.0.2 ping statistics --- 00:19:37.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.941 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:37.942 11:07:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:37.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:37.942 00:19:37.942 --- 10.0.0.3 ping statistics --- 00:19:37.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.942 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:37.942 11:07:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:37.942 00:19:37.942 --- 10.0.0.1 ping statistics --- 00:19:37.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.942 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:37.942 11:07:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.942 11:07:06 -- nvmf/common.sh@422 -- # return 0 00:19:37.942 11:07:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:37.942 11:07:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.942 11:07:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:37.942 11:07:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:37.942 11:07:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.942 11:07:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:37.942 11:07:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:37.942 11:07:06 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:19:37.942 11:07:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:37.942 11:07:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:37.942 11:07:06 -- common/autotest_common.sh@10 -- # set +x 00:19:37.942 11:07:06 -- nvmf/common.sh@470 -- # nvmfpid=83670 00:19:37.942 11:07:06 -- nvmf/common.sh@471 -- # waitforlisten 83670 00:19:37.942 11:07:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:37.942 11:07:06 -- common/autotest_common.sh@817 -- # '[' -z 83670 ']' 00:19:37.942 11:07:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.942 11:07:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:37.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.942 11:07:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:37.942 11:07:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:37.942 11:07:06 -- common/autotest_common.sh@10 -- # set +x 00:19:37.942 [2024-04-18 11:07:06.512246] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:37.942 [2024-04-18 11:07:06.512328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.200 [2024-04-18 11:07:06.650216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:38.200 [2024-04-18 11:07:06.751808] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.200 [2024-04-18 11:07:06.752183] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.200 [2024-04-18 11:07:06.752345] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.200 [2024-04-18 11:07:06.752475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.200 [2024-04-18 11:07:06.752523] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.201 [2024-04-18 11:07:06.752764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.201 [2024-04-18 11:07:06.752887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.201 [2024-04-18 11:07:06.752953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.201 [2024-04-18 11:07:06.752953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.135 11:07:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:39.135 11:07:07 -- common/autotest_common.sh@850 -- # return 0 00:19:39.135 11:07:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:39.135 11:07:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:39.135 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.135 11:07:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.135 11:07:07 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:19:39.135 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.135 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.135 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.135 11:07:07 -- target/rpc.sh@26 -- # stats='{ 00:19:39.135 "poll_groups": [ 00:19:39.135 { 00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_0", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [] 00:19:39.135 }, 00:19:39.135 { 00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_1", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [] 00:19:39.135 }, 00:19:39.135 { 00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_2", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [] 00:19:39.135 }, 00:19:39.135 { 
00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_3", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [] 00:19:39.135 } 00:19:39.135 ], 00:19:39.135 "tick_rate": 2200000000 00:19:39.135 }' 00:19:39.135 11:07:07 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:19:39.135 11:07:07 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:19:39.135 11:07:07 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:19:39.135 11:07:07 -- target/rpc.sh@15 -- # wc -l 00:19:39.135 11:07:07 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:19:39.135 11:07:07 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:19:39.135 11:07:07 -- target/rpc.sh@29 -- # [[ null == null ]] 00:19:39.135 11:07:07 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.135 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.135 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.135 [2024-04-18 11:07:07.601528] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.135 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.135 11:07:07 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:19:39.135 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.135 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.135 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.135 11:07:07 -- target/rpc.sh@33 -- # stats='{ 00:19:39.135 "poll_groups": [ 00:19:39.135 { 00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_0", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [ 00:19:39.135 { 00:19:39.135 "trtype": "TCP" 00:19:39.135 } 00:19:39.135 ] 00:19:39.135 }, 00:19:39.135 { 00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_1", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [ 00:19:39.135 { 00:19:39.135 "trtype": "TCP" 00:19:39.135 } 00:19:39.135 ] 00:19:39.135 }, 00:19:39.135 { 00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_2", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [ 00:19:39.135 { 00:19:39.135 "trtype": "TCP" 00:19:39.135 } 00:19:39.135 ] 00:19:39.135 }, 00:19:39.135 { 00:19:39.135 "admin_qpairs": 0, 00:19:39.135 "completed_nvme_io": 0, 00:19:39.135 "current_admin_qpairs": 0, 00:19:39.135 "current_io_qpairs": 0, 00:19:39.135 "io_qpairs": 0, 00:19:39.135 "name": "nvmf_tgt_poll_group_3", 00:19:39.135 "pending_bdev_io": 0, 00:19:39.135 "transports": [ 00:19:39.135 { 00:19:39.135 "trtype": "TCP" 00:19:39.135 } 00:19:39.135 ] 00:19:39.135 } 00:19:39.135 ], 00:19:39.135 "tick_rate": 2200000000 00:19:39.135 }' 00:19:39.135 11:07:07 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:19:39.135 11:07:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:39.135 11:07:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:39.135 11:07:07 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:39.135 11:07:07 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:19:39.135 11:07:07 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:19:39.135 11:07:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:39.135 11:07:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:39.135 11:07:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:39.135 11:07:07 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:19:39.135 11:07:07 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:19:39.135 11:07:07 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:19:39.135 11:07:07 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:19:39.135 11:07:07 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:39.135 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.135 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.135 Malloc1 00:19:39.135 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.135 11:07:07 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:39.135 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.135 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.393 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.393 11:07:07 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:39.393 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.393 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.393 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.393 11:07:07 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:19:39.393 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.393 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.393 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.393 11:07:07 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.393 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.393 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.393 [2024-04-18 11:07:07.802665] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.393 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.393 11:07:07 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -a 10.0.0.2 -s 4420 00:19:39.393 11:07:07 -- common/autotest_common.sh@638 -- # local es=0 00:19:39.393 11:07:07 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -a 10.0.0.2 -s 4420 00:19:39.393 11:07:07 -- common/autotest_common.sh@626 -- # local arg=nvme 00:19:39.393 11:07:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:39.393 11:07:07 -- common/autotest_common.sh@630 -- # type -t nvme 00:19:39.393 11:07:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:19:39.393 11:07:07 -- common/autotest_common.sh@632 -- # type -P nvme 00:19:39.393 11:07:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:39.393 11:07:07 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:19:39.393 11:07:07 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:19:39.393 11:07:07 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -a 10.0.0.2 -s 4420 00:19:39.393 [2024-04-18 11:07:07.830976] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4' 00:19:39.393 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:39.393 could not add new controller: failed to write to nvme-fabrics device 00:19:39.393 11:07:07 -- common/autotest_common.sh@641 -- # es=1 00:19:39.393 11:07:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:39.393 11:07:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:39.393 11:07:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:39.393 11:07:07 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:39.393 11:07:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.393 11:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:39.393 11:07:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.394 11:07:07 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:39.394 11:07:07 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:19:39.394 11:07:07 -- common/autotest_common.sh@1184 -- # local i=0 00:19:39.394 11:07:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:39.394 11:07:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:39.394 11:07:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:41.925 11:07:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:41.926 11:07:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:41.926 11:07:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:41.926 11:07:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:41.926 11:07:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:41.926 11:07:10 -- common/autotest_common.sh@1194 -- # return 0 00:19:41.926 11:07:10 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:41.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.926 11:07:10 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:41.926 11:07:10 -- common/autotest_common.sh@1205 -- # local i=0 00:19:41.926 11:07:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.926 11:07:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:41.926 11:07:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:41.926 11:07:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.926 11:07:10 -- 
common/autotest_common.sh@1217 -- # return 0 00:19:41.926 11:07:10 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:41.926 11:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.926 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:41.926 11:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.926 11:07:10 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:41.926 11:07:10 -- common/autotest_common.sh@638 -- # local es=0 00:19:41.926 11:07:10 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:41.926 11:07:10 -- common/autotest_common.sh@626 -- # local arg=nvme 00:19:41.926 11:07:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:41.926 11:07:10 -- common/autotest_common.sh@630 -- # type -t nvme 00:19:41.926 11:07:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:41.926 11:07:10 -- common/autotest_common.sh@632 -- # type -P nvme 00:19:41.926 11:07:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:41.926 11:07:10 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:19:41.926 11:07:10 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:19:41.926 11:07:10 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:41.926 [2024-04-18 11:07:10.222210] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4' 00:19:41.926 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:41.926 could not add new controller: failed to write to nvme-fabrics device 00:19:41.926 11:07:10 -- common/autotest_common.sh@641 -- # es=1 00:19:41.926 11:07:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:41.926 11:07:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:41.926 11:07:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:41.926 11:07:10 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:19:41.926 11:07:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.926 11:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:41.926 11:07:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.926 11:07:10 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:41.926 11:07:10 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:41.926 11:07:10 -- common/autotest_common.sh@1184 -- # local i=0 00:19:41.926 11:07:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:41.926 11:07:10 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:41.926 11:07:10 -- common/autotest_common.sh@1191 -- # sleep 
2 00:19:43.827 11:07:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:43.827 11:07:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:43.827 11:07:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:43.827 11:07:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:43.827 11:07:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:43.827 11:07:12 -- common/autotest_common.sh@1194 -- # return 0 00:19:43.827 11:07:12 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:43.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:43.827 11:07:12 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:43.827 11:07:12 -- common/autotest_common.sh@1205 -- # local i=0 00:19:43.827 11:07:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:43.827 11:07:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:44.085 11:07:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:44.085 11:07:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:44.085 11:07:12 -- common/autotest_common.sh@1217 -- # return 0 00:19:44.085 11:07:12 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.085 11:07:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.085 11:07:12 -- common/autotest_common.sh@10 -- # set +x 00:19:44.085 11:07:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.085 11:07:12 -- target/rpc.sh@81 -- # seq 1 5 00:19:44.085 11:07:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:44.085 11:07:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:44.085 11:07:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.085 11:07:12 -- common/autotest_common.sh@10 -- # set +x 00:19:44.085 11:07:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.085 11:07:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.085 11:07:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.085 11:07:12 -- common/autotest_common.sh@10 -- # set +x 00:19:44.085 [2024-04-18 11:07:12.509597] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.085 11:07:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.085 11:07:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:44.085 11:07:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.085 11:07:12 -- common/autotest_common.sh@10 -- # set +x 00:19:44.085 11:07:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.085 11:07:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:44.085 11:07:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.085 11:07:12 -- common/autotest_common.sh@10 -- # set +x 00:19:44.085 11:07:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.085 11:07:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:44.085 11:07:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:44.085 11:07:12 -- 
common/autotest_common.sh@1184 -- # local i=0 00:19:44.085 11:07:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:44.085 11:07:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:44.086 11:07:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:46.616 11:07:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:46.616 11:07:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:46.616 11:07:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:46.616 11:07:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:46.616 11:07:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:46.616 11:07:14 -- common/autotest_common.sh@1194 -- # return 0 00:19:46.616 11:07:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:46.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.616 11:07:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:46.616 11:07:14 -- common/autotest_common.sh@1205 -- # local i=0 00:19:46.616 11:07:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:46.616 11:07:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:46.616 11:07:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:46.616 11:07:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:46.616 11:07:14 -- common/autotest_common.sh@1217 -- # return 0 00:19:46.616 11:07:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:46.616 11:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.616 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:19:46.616 11:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.616 11:07:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.616 11:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.616 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:19:46.616 11:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.616 11:07:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:46.616 11:07:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:46.616 11:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.616 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:19:46.616 11:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.616 11:07:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.616 11:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.617 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:19:46.617 [2024-04-18 11:07:14.813141] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.617 11:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.617 11:07:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:46.617 11:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.617 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:19:46.617 11:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.617 11:07:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:46.617 11:07:14 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.617 11:07:14 -- common/autotest_common.sh@10 -- # set +x 00:19:46.617 11:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.617 11:07:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:46.617 11:07:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:46.617 11:07:14 -- common/autotest_common.sh@1184 -- # local i=0 00:19:46.617 11:07:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:46.617 11:07:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:46.617 11:07:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:48.518 11:07:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:48.518 11:07:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:48.518 11:07:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:48.518 11:07:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:48.518 11:07:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:48.518 11:07:17 -- common/autotest_common.sh@1194 -- # return 0 00:19:48.518 11:07:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:48.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:48.518 11:07:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:48.518 11:07:17 -- common/autotest_common.sh@1205 -- # local i=0 00:19:48.518 11:07:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:48.518 11:07:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:48.518 11:07:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:48.518 11:07:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:48.518 11:07:17 -- common/autotest_common.sh@1217 -- # return 0 00:19:48.518 11:07:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:48.518 11:07:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.518 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:48.518 11:07:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.518 11:07:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.518 11:07:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.518 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:48.518 11:07:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.518 11:07:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:48.518 11:07:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:48.518 11:07:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.518 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:48.518 11:07:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.518 11:07:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.518 11:07:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.518 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:48.518 [2024-04-18 11:07:17.108596] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:19:48.518 11:07:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.518 11:07:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:48.518 11:07:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.518 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:48.518 11:07:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.518 11:07:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:48.518 11:07:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.518 11:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:48.518 11:07:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.518 11:07:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:48.776 11:07:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:48.776 11:07:17 -- common/autotest_common.sh@1184 -- # local i=0 00:19:48.776 11:07:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:48.776 11:07:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:48.776 11:07:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:50.675 11:07:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:50.675 11:07:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:50.675 11:07:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:50.934 11:07:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:50.934 11:07:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:50.934 11:07:19 -- common/autotest_common.sh@1194 -- # return 0 00:19:50.934 11:07:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:50.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:50.934 11:07:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:50.934 11:07:19 -- common/autotest_common.sh@1205 -- # local i=0 00:19:50.934 11:07:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:50.934 11:07:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:50.934 11:07:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:50.934 11:07:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:50.934 11:07:19 -- common/autotest_common.sh@1217 -- # return 0 00:19:50.934 11:07:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:50.934 11:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.934 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:19:50.934 11:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.934 11:07:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.934 11:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.934 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:19:50.934 11:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.934 11:07:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:50.934 11:07:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:50.934 11:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 
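For readers skimming the trace: each of the five iterations above runs the same create -> connect -> disconnect -> teardown cycle against cnode1. Below is a condensed shell sketch of that cycle reconstructed from the xtrace lines; the rpc.py path, NQN, serial number and the 10.0.0.2:4420 listener are taken from the log, while the explicit loop and inline polling are simplified stand-ins for the script's rpc_cmd, waitforserial and waitforserial_disconnect helpers (the traced nvme connect also passes --hostnqn/--hostid, omitted here).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    serial=SPDKISFASTANDAWESOME

    for i in $(seq 1 5); do
        "$rpc" nvmf_create_subsystem "$nqn" -s "$serial"                      # new subsystem with a known serial
        "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # NVMe/TCP listener
        "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                      # namespace 5 backed by Malloc1
        "$rpc" nvmf_subsystem_allow_any_host "$nqn"
        nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420                     # host-side connect
        until lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 2; done   # waitforserial, simplified
        nvme disconnect -n "$nqn"
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 1; done   # waitforserial_disconnect, simplified
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 5
        "$rpc" nvmf_delete_subsystem "$nqn"
    done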
00:19:50.934 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:19:50.934 11:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.934 11:07:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.934 11:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.934 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:19:50.934 [2024-04-18 11:07:19.404180] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.934 11:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.934 11:07:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:50.934 11:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.934 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:19:50.934 11:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.934 11:07:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:50.935 11:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.935 11:07:19 -- common/autotest_common.sh@10 -- # set +x 00:19:50.935 11:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.935 11:07:19 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:51.193 11:07:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:51.193 11:07:19 -- common/autotest_common.sh@1184 -- # local i=0 00:19:51.193 11:07:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:51.193 11:07:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:51.193 11:07:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:53.095 11:07:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:53.095 11:07:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:53.095 11:07:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:53.095 11:07:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:53.095 11:07:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.095 11:07:21 -- common/autotest_common.sh@1194 -- # return 0 00:19:53.095 11:07:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:53.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:53.095 11:07:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:53.095 11:07:21 -- common/autotest_common.sh@1205 -- # local i=0 00:19:53.095 11:07:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:53.095 11:07:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.095 11:07:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.095 11:07:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:53.095 11:07:21 -- common/autotest_common.sh@1217 -- # return 0 00:19:53.095 11:07:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:53.095 11:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.095 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:53.095 11:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.095 11:07:21 -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.095 11:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.095 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:53.095 11:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.095 11:07:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:53.095 11:07:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:53.095 11:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.095 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:53.095 11:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.095 11:07:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.095 11:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.095 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:53.095 [2024-04-18 11:07:21.707822] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.095 11:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.095 11:07:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:53.095 11:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.095 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:53.095 11:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.095 11:07:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:53.095 11:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.095 11:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:53.095 11:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.095 11:07:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:53.354 11:07:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:53.354 11:07:21 -- common/autotest_common.sh@1184 -- # local i=0 00:19:53.354 11:07:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:53.354 11:07:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:53.354 11:07:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:55.259 11:07:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:55.259 11:07:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:55.259 11:07:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:55.519 11:07:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:55.519 11:07:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:55.519 11:07:23 -- common/autotest_common.sh@1194 -- # return 0 00:19:55.519 11:07:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:55.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:55.519 11:07:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:55.519 11:07:24 -- common/autotest_common.sh@1205 -- # local i=0 00:19:55.519 11:07:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:55.519 11:07:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:55.519 11:07:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:19:55.519 11:07:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:55.519 11:07:24 -- common/autotest_common.sh@1217 -- # return 0 00:19:55.519 11:07:24 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@99 -- # seq 1 5 00:19:55.519 11:07:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:55.519 11:07:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 [2024-04-18 11:07:24.107256] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.519 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.519 11:07:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:55.519 11:07:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:55.519 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.519 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@101 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 [2024-04-18 11:07:24.163289] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:55.778 11:07:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 [2024-04-18 11:07:24.215366] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 
11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:55.778 11:07:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 [2024-04-18 11:07:24.263399] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:55.778 11:07:24 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 [2024-04-18 11:07:24.311458] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:55.778 11:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.778 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.778 11:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.778 11:07:24 -- target/rpc.sh@110 -- # stats='{ 00:19:55.778 "poll_groups": [ 00:19:55.778 { 00:19:55.778 "admin_qpairs": 2, 00:19:55.778 "completed_nvme_io": 65, 00:19:55.778 "current_admin_qpairs": 0, 00:19:55.778 "current_io_qpairs": 0, 00:19:55.778 "io_qpairs": 16, 00:19:55.778 "name": "nvmf_tgt_poll_group_0", 00:19:55.778 "pending_bdev_io": 0, 00:19:55.778 "transports": [ 00:19:55.778 { 00:19:55.778 "trtype": "TCP" 00:19:55.778 } 00:19:55.778 ] 00:19:55.778 }, 00:19:55.778 { 00:19:55.778 "admin_qpairs": 3, 00:19:55.778 "completed_nvme_io": 70, 00:19:55.778 "current_admin_qpairs": 0, 00:19:55.778 "current_io_qpairs": 0, 00:19:55.778 "io_qpairs": 17, 00:19:55.778 "name": "nvmf_tgt_poll_group_1", 00:19:55.778 "pending_bdev_io": 0, 00:19:55.778 "transports": [ 00:19:55.778 { 00:19:55.778 "trtype": "TCP" 00:19:55.778 } 00:19:55.778 ] 00:19:55.778 }, 00:19:55.778 { 00:19:55.778 "admin_qpairs": 1, 00:19:55.778 "completed_nvme_io": 119, 00:19:55.778 "current_admin_qpairs": 0, 00:19:55.778 "current_io_qpairs": 0, 00:19:55.778 "io_qpairs": 19, 00:19:55.778 "name": "nvmf_tgt_poll_group_2", 00:19:55.778 "pending_bdev_io": 0, 00:19:55.778 "transports": [ 00:19:55.778 { 00:19:55.778 "trtype": "TCP" 00:19:55.778 } 00:19:55.778 ] 00:19:55.778 }, 00:19:55.778 { 00:19:55.778 "admin_qpairs": 1, 00:19:55.778 "completed_nvme_io": 166, 00:19:55.778 "current_admin_qpairs": 0, 00:19:55.778 "current_io_qpairs": 0, 00:19:55.778 "io_qpairs": 18, 00:19:55.778 "name": "nvmf_tgt_poll_group_3", 00:19:55.778 "pending_bdev_io": 0, 00:19:55.778 "transports": [ 00:19:55.778 { 00:19:55.778 "trtype": "TCP" 00:19:55.778 } 00:19:55.778 ] 00:19:55.778 } 00:19:55.778 ], 00:19:55.778 "tick_rate": 2200000000 00:19:55.778 }' 00:19:55.778 11:07:24 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:55.778 11:07:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:55.778 11:07:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:55.778 11:07:24 -- target/rpc.sh@20 -- # awk 
'{s+=$1}END{print s}' 00:19:56.037 11:07:24 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:56.037 11:07:24 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:56.037 11:07:24 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:56.037 11:07:24 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:56.037 11:07:24 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:56.037 11:07:24 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:19:56.037 11:07:24 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:56.037 11:07:24 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:56.037 11:07:24 -- target/rpc.sh@123 -- # nvmftestfini 00:19:56.037 11:07:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:56.037 11:07:24 -- nvmf/common.sh@117 -- # sync 00:19:56.037 11:07:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.037 11:07:24 -- nvmf/common.sh@120 -- # set +e 00:19:56.037 11:07:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.037 11:07:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.037 rmmod nvme_tcp 00:19:56.037 rmmod nvme_fabrics 00:19:56.037 rmmod nvme_keyring 00:19:56.037 11:07:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.037 11:07:24 -- nvmf/common.sh@124 -- # set -e 00:19:56.037 11:07:24 -- nvmf/common.sh@125 -- # return 0 00:19:56.037 11:07:24 -- nvmf/common.sh@478 -- # '[' -n 83670 ']' 00:19:56.037 11:07:24 -- nvmf/common.sh@479 -- # killprocess 83670 00:19:56.037 11:07:24 -- common/autotest_common.sh@936 -- # '[' -z 83670 ']' 00:19:56.037 11:07:24 -- common/autotest_common.sh@940 -- # kill -0 83670 00:19:56.037 11:07:24 -- common/autotest_common.sh@941 -- # uname 00:19:56.037 11:07:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.037 11:07:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83670 00:19:56.037 killing process with pid 83670 00:19:56.037 11:07:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:56.037 11:07:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:56.037 11:07:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83670' 00:19:56.037 11:07:24 -- common/autotest_common.sh@955 -- # kill 83670 00:19:56.037 11:07:24 -- common/autotest_common.sh@960 -- # wait 83670 00:19:56.294 11:07:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:56.294 11:07:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:56.294 11:07:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:56.294 11:07:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.294 11:07:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:56.295 11:07:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.295 11:07:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.295 11:07:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.295 11:07:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:56.295 00:19:56.295 real 0m18.882s 00:19:56.295 user 1m10.812s 00:19:56.295 sys 0m2.645s 00:19:56.295 11:07:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:56.295 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:56.295 ************************************ 00:19:56.295 END TEST nvmf_rpc 00:19:56.295 ************************************ 00:19:56.295 11:07:24 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:56.295 11:07:24 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.295 11:07:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.295 11:07:24 -- common/autotest_common.sh@10 -- # set +x 00:19:56.552 ************************************ 00:19:56.552 START TEST nvmf_invalid 00:19:56.552 ************************************ 00:19:56.552 11:07:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:56.552 * Looking for test storage... 00:19:56.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:56.553 11:07:25 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.553 11:07:25 -- nvmf/common.sh@7 -- # uname -s 00:19:56.553 11:07:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.553 11:07:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.553 11:07:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.553 11:07:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.553 11:07:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.553 11:07:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.553 11:07:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.553 11:07:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.553 11:07:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.553 11:07:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.553 11:07:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:56.553 11:07:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:19:56.553 11:07:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.553 11:07:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.553 11:07:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.553 11:07:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.553 11:07:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.553 11:07:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.553 11:07:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.553 11:07:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.553 11:07:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.553 11:07:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.553 11:07:25 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.553 11:07:25 -- paths/export.sh@5 -- # export PATH 00:19:56.553 11:07:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.553 11:07:25 -- nvmf/common.sh@47 -- # : 0 00:19:56.553 11:07:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.553 11:07:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.553 11:07:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.553 11:07:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.553 11:07:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.553 11:07:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.553 11:07:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.553 11:07:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.553 11:07:25 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:19:56.553 11:07:25 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:56.553 11:07:25 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:56.553 11:07:25 -- target/invalid.sh@14 -- # target=foobar 00:19:56.553 11:07:25 -- target/invalid.sh@16 -- # RANDOM=0 00:19:56.553 11:07:25 -- target/invalid.sh@34 -- # nvmftestinit 00:19:56.553 11:07:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:56.553 11:07:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.553 11:07:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:56.553 11:07:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:56.553 11:07:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:56.553 11:07:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.553 11:07:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.553 11:07:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.553 11:07:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:56.553 11:07:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:56.553 11:07:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:56.553 11:07:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:56.553 11:07:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:56.553 11:07:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:56.553 11:07:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.553 11:07:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.553 11:07:25 -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.553 11:07:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:56.553 11:07:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.553 11:07:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.553 11:07:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.553 11:07:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.553 11:07:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.553 11:07:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.553 11:07:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.553 11:07:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.553 11:07:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:56.553 11:07:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:56.553 Cannot find device "nvmf_tgt_br" 00:19:56.553 11:07:25 -- nvmf/common.sh@155 -- # true 00:19:56.553 11:07:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.553 Cannot find device "nvmf_tgt_br2" 00:19:56.553 11:07:25 -- nvmf/common.sh@156 -- # true 00:19:56.553 11:07:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:56.553 11:07:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:56.553 Cannot find device "nvmf_tgt_br" 00:19:56.553 11:07:25 -- nvmf/common.sh@158 -- # true 00:19:56.553 11:07:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:56.553 Cannot find device "nvmf_tgt_br2" 00:19:56.553 11:07:25 -- nvmf/common.sh@159 -- # true 00:19:56.553 11:07:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:56.811 11:07:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:56.811 11:07:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.811 11:07:25 -- nvmf/common.sh@162 -- # true 00:19:56.811 11:07:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.811 11:07:25 -- nvmf/common.sh@163 -- # true 00:19:56.811 11:07:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.811 11:07:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.811 11:07:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.811 11:07:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.811 11:07:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.811 11:07:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.811 11:07:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.811 11:07:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.811 11:07:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.811 11:07:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:56.811 11:07:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:56.811 11:07:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:56.811 11:07:25 -- nvmf/common.sh@186 -- # ip link 
set nvmf_tgt_br2 up 00:19:56.811 11:07:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.811 11:07:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.811 11:07:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.811 11:07:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:56.811 11:07:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:56.811 11:07:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.811 11:07:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.811 11:07:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.811 11:07:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.811 11:07:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.811 11:07:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:56.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:19:56.811 00:19:56.811 --- 10.0.0.2 ping statistics --- 00:19:56.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.811 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:56.811 11:07:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:56.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:19:56.811 00:19:56.811 --- 10.0.0.3 ping statistics --- 00:19:56.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.811 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:56.811 11:07:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:56.811 00:19:56.811 --- 10.0.0.1 ping statistics --- 00:19:56.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.811 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:57.069 11:07:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.069 11:07:25 -- nvmf/common.sh@422 -- # return 0 00:19:57.069 11:07:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:57.069 11:07:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.069 11:07:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:57.069 11:07:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:57.069 11:07:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.069 11:07:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:57.069 11:07:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:57.069 11:07:25 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:57.069 11:07:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:57.069 11:07:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:57.069 11:07:25 -- common/autotest_common.sh@10 -- # set +x 00:19:57.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:57.069 11:07:25 -- nvmf/common.sh@470 -- # nvmfpid=84186 00:19:57.069 11:07:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:57.069 11:07:25 -- nvmf/common.sh@471 -- # waitforlisten 84186 00:19:57.069 11:07:25 -- common/autotest_common.sh@817 -- # '[' -z 84186 ']' 00:19:57.069 11:07:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.069 11:07:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:57.069 11:07:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.069 11:07:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:57.069 11:07:25 -- common/autotest_common.sh@10 -- # set +x 00:19:57.069 [2024-04-18 11:07:25.538171] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:57.069 [2024-04-18 11:07:25.538429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.069 [2024-04-18 11:07:25.681394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.327 [2024-04-18 11:07:25.799479] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.327 [2024-04-18 11:07:25.799775] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.327 [2024-04-18 11:07:25.800360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.327 [2024-04-18 11:07:25.800540] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.327 [2024-04-18 11:07:25.800909] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
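Before the invalid-parameter cases run, the nvmf_veth_init and nvmfappstart steps traced above build the test network and launch the target. A rough sketch of that environment, using only commands that appear in the trace (interface names and addresses as logged; the link-up commands and the second target address 10.0.0.3 are omitted, so this is a summary rather than the full helper):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                             # bridge the two pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace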
00:19:57.327 [2024-04-18 11:07:25.801208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.327 [2024-04-18 11:07:25.801337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.327 [2024-04-18 11:07:25.805560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.327 [2024-04-18 11:07:25.805618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.259 11:07:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:58.259 11:07:26 -- common/autotest_common.sh@850 -- # return 0 00:19:58.259 11:07:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:58.259 11:07:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:58.259 11:07:26 -- common/autotest_common.sh@10 -- # set +x 00:19:58.259 11:07:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.259 11:07:26 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:58.259 11:07:26 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5404 00:19:58.517 [2024-04-18 11:07:26.910785] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:58.517 11:07:26 -- target/invalid.sh@40 -- # out='2024/04/18 11:07:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5404 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:19:58.517 request: 00:19:58.517 { 00:19:58.517 "method": "nvmf_create_subsystem", 00:19:58.517 "params": { 00:19:58.517 "nqn": "nqn.2016-06.io.spdk:cnode5404", 00:19:58.517 "tgt_name": "foobar" 00:19:58.517 } 00:19:58.517 } 00:19:58.517 Got JSON-RPC error response 00:19:58.517 GoRPCClient: error on JSON-RPC call' 00:19:58.517 11:07:26 -- target/invalid.sh@41 -- # [[ 2024/04/18 11:07:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5404 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:19:58.517 request: 00:19:58.517 { 00:19:58.517 "method": "nvmf_create_subsystem", 00:19:58.517 "params": { 00:19:58.517 "nqn": "nqn.2016-06.io.spdk:cnode5404", 00:19:58.517 "tgt_name": "foobar" 00:19:58.517 } 00:19:58.517 } 00:19:58.517 Got JSON-RPC error response 00:19:58.517 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:58.517 11:07:26 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:58.517 11:07:26 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23225 00:19:58.843 [2024-04-18 11:07:27.199085] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23225: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:58.843 11:07:27 -- target/invalid.sh@45 -- # out='2024/04/18 11:07:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23225 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:19:58.843 request: 00:19:58.843 { 00:19:58.843 "method": "nvmf_create_subsystem", 00:19:58.843 "params": { 00:19:58.843 "nqn": "nqn.2016-06.io.spdk:cnode23225", 00:19:58.843 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:19:58.843 } 00:19:58.843 } 00:19:58.843 Got JSON-RPC error response 00:19:58.843 GoRPCClient: error on JSON-RPC call' 00:19:58.843 11:07:27 -- target/invalid.sh@46 -- # [[ 2024/04/18 11:07:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23225 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:19:58.843 request: 00:19:58.843 { 00:19:58.843 "method": "nvmf_create_subsystem", 00:19:58.843 "params": { 00:19:58.843 "nqn": "nqn.2016-06.io.spdk:cnode23225", 00:19:58.843 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:19:58.843 } 00:19:58.843 } 00:19:58.843 Got JSON-RPC error response 00:19:58.843 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:58.843 11:07:27 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:58.843 11:07:27 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26102 00:19:59.103 [2024-04-18 11:07:27.499368] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26102: invalid model number 'SPDK_Controller' 00:19:59.103 11:07:27 -- target/invalid.sh@50 -- # out='2024/04/18 11:07:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode26102], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:19:59.103 request: 00:19:59.103 { 00:19:59.103 "method": "nvmf_create_subsystem", 00:19:59.103 "params": { 00:19:59.103 "nqn": "nqn.2016-06.io.spdk:cnode26102", 00:19:59.103 "model_number": "SPDK_Controller\u001f" 00:19:59.103 } 00:19:59.103 } 00:19:59.103 Got JSON-RPC error response 00:19:59.103 GoRPCClient: error on JSON-RPC call' 00:19:59.103 11:07:27 -- target/invalid.sh@51 -- # [[ 2024/04/18 11:07:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode26102], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:19:59.103 request: 00:19:59.103 { 00:19:59.103 "method": "nvmf_create_subsystem", 00:19:59.103 "params": { 00:19:59.103 "nqn": "nqn.2016-06.io.spdk:cnode26102", 00:19:59.103 "model_number": "SPDK_Controller\u001f" 00:19:59.103 } 00:19:59.103 } 00:19:59.103 Got JSON-RPC error response 00:19:59.103 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:59.103 11:07:27 -- target/invalid.sh@54 -- # gen_random_s 21 00:19:59.103 11:07:27 -- target/invalid.sh@19 -- # local length=21 ll 00:19:59.103 11:07:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:59.103 11:07:27 -- target/invalid.sh@21 -- # local chars 00:19:59.103 11:07:27 -- target/invalid.sh@22 -- # local string 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 
00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 113 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=q 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 73 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=I 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 54 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x36' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=6 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 121 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=y 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 74 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=J 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 82 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x52' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=R 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 81 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x51' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=Q 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 36 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x24' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+='$' 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 55 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x37' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=7 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 69 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=E 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 58 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=: 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 
00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 79 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # string+=O 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.103 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # printf %x 122 00:19:59.103 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+=z 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 55 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x37' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+=7 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 123 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+='{' 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 51 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x33' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+=3 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 74 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+=J 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 34 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x22' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+='"' 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 94 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+='^' 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 62 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+='>' 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # printf %x 80 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:59.104 11:07:27 -- target/invalid.sh@25 -- # string+=P 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.104 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.104 11:07:27 -- target/invalid.sh@28 -- # [[ q == \- ]] 00:19:59.104 11:07:27 -- target/invalid.sh@31 -- # echo 'qI6yJRQ$7E:Oz7{3J"^>P' 00:19:59.104 11:07:27 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'qI6yJRQ$7E:Oz7{3J"^>P' nqn.2016-06.io.spdk:cnode12556 
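The repetitive xtrace above is invalid.sh's gen_random_s helper assembling a 21-character string one random ASCII code at a time: printf %x turns the decimal code into hex, echo -e turns the hex escape into a literal character, and string+= appends it. A minimal shell sketch of that loop, reconstructed from the trace rather than copied from the script (the random index selection is an assumption; only the per-character steps are visible above):

gen_random_s() {
    local length=$1 ll
    # Printable ASCII plus DEL: decimal codes 32..127, matching the chars=() array in the trace.
    local chars=($(seq 32 127)) string=
    for (( ll = 0; ll < length; ll++ )); do
        # Pick one code (selection method assumed), render it as hex, append the literal character.
        local hex=$(printf %x "${chars[RANDOM % ${#chars[@]}]}")
        string+=$(echo -e "\x$hex")
    done
    # invalid.sh line 28 additionally checks that the first character is not '-'.
    echo "$string"
}

The echoed result ('qI6yJRQ$7E:Oz7{3J"^>P' in this run) is then passed to rpc.py nvmf_create_subsystem -s below, where the target rejects it as an invalid serial number.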
00:19:59.362 [2024-04-18 11:07:27.935707] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12556: invalid serial number 'qI6yJRQ$7E:Oz7{3J"^>P' 00:19:59.362 11:07:27 -- target/invalid.sh@54 -- # out='2024/04/18 11:07:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12556 serial_number:qI6yJRQ$7E:Oz7{3J"^>P], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN qI6yJRQ$7E:Oz7{3J"^>P 00:19:59.362 request: 00:19:59.362 { 00:19:59.362 "method": "nvmf_create_subsystem", 00:19:59.362 "params": { 00:19:59.362 "nqn": "nqn.2016-06.io.spdk:cnode12556", 00:19:59.362 "serial_number": "qI6yJRQ$7E:Oz7{3J\"^>P" 00:19:59.362 } 00:19:59.362 } 00:19:59.362 Got JSON-RPC error response 00:19:59.362 GoRPCClient: error on JSON-RPC call' 00:19:59.362 11:07:27 -- target/invalid.sh@55 -- # [[ 2024/04/18 11:07:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12556 serial_number:qI6yJRQ$7E:Oz7{3J"^>P], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN qI6yJRQ$7E:Oz7{3J"^>P 00:19:59.362 request: 00:19:59.362 { 00:19:59.362 "method": "nvmf_create_subsystem", 00:19:59.362 "params": { 00:19:59.362 "nqn": "nqn.2016-06.io.spdk:cnode12556", 00:19:59.362 "serial_number": "qI6yJRQ$7E:Oz7{3J\"^>P" 00:19:59.362 } 00:19:59.362 } 00:19:59.362 Got JSON-RPC error response 00:19:59.362 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:59.362 11:07:27 -- target/invalid.sh@58 -- # gen_random_s 41 00:19:59.362 11:07:27 -- target/invalid.sh@19 -- # local length=41 ll 00:19:59.362 11:07:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:59.362 11:07:27 -- target/invalid.sh@21 -- # local chars 00:19:59.362 11:07:27 -- target/invalid.sh@22 -- # local string 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 50 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # string+=2 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 99 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # string+=c 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 78 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # string+=N 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 50 00:19:59.362 11:07:27 -- 
target/invalid.sh@25 -- # echo -e '\x32' 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # string+=2 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 114 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x72' 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # string+=r 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 69 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # string+=E 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 69 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # string+=E 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.362 11:07:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.362 11:07:27 -- target/invalid.sh@25 -- # printf %x 61 00:19:59.362 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # string+== 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # printf %x 37 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x25' 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # string+=% 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # printf %x 125 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # string+='}' 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # printf %x 60 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # string+='<' 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # printf %x 48 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # string+=0 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.621 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # printf %x 80 00:19:59.621 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=P 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 62 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+='>' 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 121 00:19:59.622 11:07:28 -- 
target/invalid.sh@25 -- # echo -e '\x79' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=y 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 93 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=']' 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 75 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=K 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 40 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x28' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+='(' 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 79 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=O 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 105 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x69' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=i 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 70 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=F 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 77 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=M 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 121 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=y 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 121 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=y 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 124 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+='|' 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 43 00:19:59.622 11:07:28 -- 
target/invalid.sh@25 -- # echo -e '\x2b' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=+ 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 58 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=: 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 75 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=K 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 76 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=L 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 49 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=1 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 71 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x47' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=G 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 44 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=, 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 63 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+='?' 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 46 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=. 
00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 127 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=$'\177' 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 50 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=2 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 73 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=I 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 82 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x52' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=R 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 72 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=H 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 114 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x72' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=r 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # printf %x 50 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:59.622 11:07:28 -- target/invalid.sh@25 -- # string+=2 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:59.622 11:07:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:59.622 11:07:28 -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:19:59.622 11:07:28 -- target/invalid.sh@31 -- # echo '2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.2IRHr2' 00:19:59.622 11:07:28 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.2IRHr2' nqn.2016-06.io.spdk:cnode25395 00:19:59.881 [2024-04-18 11:07:28.420128] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25395: invalid model number '2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.2IRHr2' 00:19:59.881 11:07:28 -- target/invalid.sh@58 -- # out='2024/04/18 11:07:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.2IRHr2 nqn:nqn.2016-06.io.spdk:cnode25395], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.2IRHr2 00:19:59.881 request: 00:19:59.881 { 00:19:59.881 "method": "nvmf_create_subsystem", 00:19:59.881 "params": { 00:19:59.881 "nqn": "nqn.2016-06.io.spdk:cnode25395", 00:19:59.881 "model_number": "2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.\u007f2IRHr2" 
00:19:59.881 } 00:19:59.881 } 00:19:59.881 Got JSON-RPC error response 00:19:59.881 GoRPCClient: error on JSON-RPC call' 00:19:59.881 11:07:28 -- target/invalid.sh@59 -- # [[ 2024/04/18 11:07:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.2IRHr2 nqn:nqn.2016-06.io.spdk:cnode25395], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.2IRHr2 00:19:59.881 request: 00:19:59.881 { 00:19:59.881 "method": "nvmf_create_subsystem", 00:19:59.881 "params": { 00:19:59.881 "nqn": "nqn.2016-06.io.spdk:cnode25395", 00:19:59.881 "model_number": "2cN2rEE=%}<0P>y]K(OiFMyy|+:KL1G,?.\u007f2IRHr2" 00:19:59.881 } 00:19:59.881 } 00:19:59.881 Got JSON-RPC error response 00:19:59.881 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:59.881 11:07:28 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:20:00.140 [2024-04-18 11:07:28.708431] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.140 11:07:28 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:20:00.399 11:07:29 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:20:00.657 11:07:29 -- target/invalid.sh@67 -- # echo '' 00:20:00.657 11:07:29 -- target/invalid.sh@67 -- # head -n 1 00:20:00.657 11:07:29 -- target/invalid.sh@67 -- # IP= 00:20:00.657 11:07:29 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:20:00.657 [2024-04-18 11:07:29.251551] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:20:00.657 11:07:29 -- target/invalid.sh@69 -- # out='2024/04/18 11:07:29 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:20:00.657 request: 00:20:00.657 { 00:20:00.657 "method": "nvmf_subsystem_remove_listener", 00:20:00.657 "params": { 00:20:00.657 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:00.657 "listen_address": { 00:20:00.657 "trtype": "tcp", 00:20:00.657 "traddr": "", 00:20:00.657 "trsvcid": "4421" 00:20:00.657 } 00:20:00.657 } 00:20:00.657 } 00:20:00.657 Got JSON-RPC error response 00:20:00.657 GoRPCClient: error on JSON-RPC call' 00:20:00.657 11:07:29 -- target/invalid.sh@70 -- # [[ 2024/04/18 11:07:29 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:20:00.657 request: 00:20:00.657 { 00:20:00.657 "method": "nvmf_subsystem_remove_listener", 00:20:00.657 "params": { 00:20:00.657 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:00.657 "listen_address": { 00:20:00.657 "trtype": "tcp", 00:20:00.657 "traddr": "", 00:20:00.657 "trsvcid": "4421" 00:20:00.657 } 00:20:00.657 } 00:20:00.657 } 00:20:00.657 Got JSON-RPC error response 00:20:00.657 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:20:00.657 11:07:29 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9247 -i 0 00:20:00.915 
[2024-04-18 11:07:29.491656] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9247: invalid cntlid range [0-65519] 00:20:00.915 11:07:29 -- target/invalid.sh@73 -- # out='2024/04/18 11:07:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9247], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:20:00.915 request: 00:20:00.916 { 00:20:00.916 "method": "nvmf_create_subsystem", 00:20:00.916 "params": { 00:20:00.916 "nqn": "nqn.2016-06.io.spdk:cnode9247", 00:20:00.916 "min_cntlid": 0 00:20:00.916 } 00:20:00.916 } 00:20:00.916 Got JSON-RPC error response 00:20:00.916 GoRPCClient: error on JSON-RPC call' 00:20:00.916 11:07:29 -- target/invalid.sh@74 -- # [[ 2024/04/18 11:07:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9247], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:20:00.916 request: 00:20:00.916 { 00:20:00.916 "method": "nvmf_create_subsystem", 00:20:00.916 "params": { 00:20:00.916 "nqn": "nqn.2016-06.io.spdk:cnode9247", 00:20:00.916 "min_cntlid": 0 00:20:00.916 } 00:20:00.916 } 00:20:00.916 Got JSON-RPC error response 00:20:00.916 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:00.916 11:07:29 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4274 -i 65520 00:20:01.174 [2024-04-18 11:07:29.791917] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4274: invalid cntlid range [65520-65519] 00:20:01.433 11:07:29 -- target/invalid.sh@75 -- # out='2024/04/18 11:07:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4274], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:20:01.433 request: 00:20:01.433 { 00:20:01.433 "method": "nvmf_create_subsystem", 00:20:01.433 "params": { 00:20:01.433 "nqn": "nqn.2016-06.io.spdk:cnode4274", 00:20:01.433 "min_cntlid": 65520 00:20:01.433 } 00:20:01.433 } 00:20:01.433 Got JSON-RPC error response 00:20:01.433 GoRPCClient: error on JSON-RPC call' 00:20:01.433 11:07:29 -- target/invalid.sh@76 -- # [[ 2024/04/18 11:07:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4274], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:20:01.433 request: 00:20:01.433 { 00:20:01.433 "method": "nvmf_create_subsystem", 00:20:01.433 "params": { 00:20:01.433 "nqn": "nqn.2016-06.io.spdk:cnode4274", 00:20:01.433 "min_cntlid": 65520 00:20:01.433 } 00:20:01.433 } 00:20:01.433 Got JSON-RPC error response 00:20:01.433 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:01.433 11:07:29 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12762 -I 0 00:20:01.691 [2024-04-18 11:07:30.088190] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12762: invalid cntlid range [1-0] 00:20:01.691 11:07:30 -- target/invalid.sh@77 -- # out='2024/04/18 11:07:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12762], err: 
error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:20:01.691 request: 00:20:01.691 { 00:20:01.692 "method": "nvmf_create_subsystem", 00:20:01.692 "params": { 00:20:01.692 "nqn": "nqn.2016-06.io.spdk:cnode12762", 00:20:01.692 "max_cntlid": 0 00:20:01.692 } 00:20:01.692 } 00:20:01.692 Got JSON-RPC error response 00:20:01.692 GoRPCClient: error on JSON-RPC call' 00:20:01.692 11:07:30 -- target/invalid.sh@78 -- # [[ 2024/04/18 11:07:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12762], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:20:01.692 request: 00:20:01.692 { 00:20:01.692 "method": "nvmf_create_subsystem", 00:20:01.692 "params": { 00:20:01.692 "nqn": "nqn.2016-06.io.spdk:cnode12762", 00:20:01.692 "max_cntlid": 0 00:20:01.692 } 00:20:01.692 } 00:20:01.692 Got JSON-RPC error response 00:20:01.692 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:01.692 11:07:30 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5894 -I 65520 00:20:01.950 [2024-04-18 11:07:30.364441] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5894: invalid cntlid range [1-65520] 00:20:01.950 11:07:30 -- target/invalid.sh@79 -- # out='2024/04/18 11:07:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5894], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:20:01.950 request: 00:20:01.950 { 00:20:01.950 "method": "nvmf_create_subsystem", 00:20:01.950 "params": { 00:20:01.950 "nqn": "nqn.2016-06.io.spdk:cnode5894", 00:20:01.950 "max_cntlid": 65520 00:20:01.950 } 00:20:01.950 } 00:20:01.950 Got JSON-RPC error response 00:20:01.950 GoRPCClient: error on JSON-RPC call' 00:20:01.950 11:07:30 -- target/invalid.sh@80 -- # [[ 2024/04/18 11:07:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5894], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:20:01.950 request: 00:20:01.950 { 00:20:01.950 "method": "nvmf_create_subsystem", 00:20:01.950 "params": { 00:20:01.950 "nqn": "nqn.2016-06.io.spdk:cnode5894", 00:20:01.950 "max_cntlid": 65520 00:20:01.950 } 00:20:01.950 } 00:20:01.950 Got JSON-RPC error response 00:20:01.950 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:01.950 11:07:30 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16549 -i 6 -I 5 00:20:02.208 [2024-04-18 11:07:30.628684] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16549: invalid cntlid range [6-5] 00:20:02.208 11:07:30 -- target/invalid.sh@83 -- # out='2024/04/18 11:07:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16549], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:20:02.208 request: 00:20:02.208 { 00:20:02.208 "method": "nvmf_create_subsystem", 00:20:02.208 "params": { 00:20:02.208 "nqn": "nqn.2016-06.io.spdk:cnode16549", 00:20:02.208 "min_cntlid": 6, 00:20:02.208 "max_cntlid": 5 00:20:02.208 } 00:20:02.208 } 
00:20:02.208 Got JSON-RPC error response 00:20:02.208 GoRPCClient: error on JSON-RPC call' 00:20:02.208 11:07:30 -- target/invalid.sh@84 -- # [[ 2024/04/18 11:07:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16549], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:20:02.208 request: 00:20:02.208 { 00:20:02.208 "method": "nvmf_create_subsystem", 00:20:02.208 "params": { 00:20:02.208 "nqn": "nqn.2016-06.io.spdk:cnode16549", 00:20:02.208 "min_cntlid": 6, 00:20:02.208 "max_cntlid": 5 00:20:02.208 } 00:20:02.208 } 00:20:02.208 Got JSON-RPC error response 00:20:02.208 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:02.208 11:07:30 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:20:02.208 11:07:30 -- target/invalid.sh@87 -- # out='request: 00:20:02.208 { 00:20:02.208 "name": "foobar", 00:20:02.208 "method": "nvmf_delete_target", 00:20:02.208 "req_id": 1 00:20:02.208 } 00:20:02.208 Got JSON-RPC error response 00:20:02.208 response: 00:20:02.208 { 00:20:02.208 "code": -32602, 00:20:02.208 "message": "The specified target doesn'\''t exist, cannot delete it." 00:20:02.208 }' 00:20:02.208 11:07:30 -- target/invalid.sh@88 -- # [[ request: 00:20:02.208 { 00:20:02.208 "name": "foobar", 00:20:02.208 "method": "nvmf_delete_target", 00:20:02.208 "req_id": 1 00:20:02.208 } 00:20:02.208 Got JSON-RPC error response 00:20:02.208 response: 00:20:02.208 { 00:20:02.208 "code": -32602, 00:20:02.208 "message": "The specified target doesn't exist, cannot delete it." 00:20:02.208 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:20:02.208 11:07:30 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:02.208 11:07:30 -- target/invalid.sh@91 -- # nvmftestfini 00:20:02.208 11:07:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:02.208 11:07:30 -- nvmf/common.sh@117 -- # sync 00:20:02.208 11:07:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.208 11:07:30 -- nvmf/common.sh@120 -- # set +e 00:20:02.208 11:07:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.208 11:07:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.208 rmmod nvme_tcp 00:20:02.208 rmmod nvme_fabrics 00:20:02.467 rmmod nvme_keyring 00:20:02.467 11:07:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.467 11:07:30 -- nvmf/common.sh@124 -- # set -e 00:20:02.467 11:07:30 -- nvmf/common.sh@125 -- # return 0 00:20:02.467 11:07:30 -- nvmf/common.sh@478 -- # '[' -n 84186 ']' 00:20:02.467 11:07:30 -- nvmf/common.sh@479 -- # killprocess 84186 00:20:02.467 11:07:30 -- common/autotest_common.sh@936 -- # '[' -z 84186 ']' 00:20:02.467 11:07:30 -- common/autotest_common.sh@940 -- # kill -0 84186 00:20:02.467 11:07:30 -- common/autotest_common.sh@941 -- # uname 00:20:02.467 11:07:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.467 11:07:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84186 00:20:02.467 11:07:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:02.467 killing process with pid 84186 00:20:02.467 11:07:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:02.467 11:07:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84186' 00:20:02.467 11:07:30 -- common/autotest_common.sh@955 -- # kill 
84186 00:20:02.467 11:07:30 -- common/autotest_common.sh@960 -- # wait 84186 00:20:02.467 11:07:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:02.467 11:07:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:02.467 11:07:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:02.467 11:07:31 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.467 11:07:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.467 11:07:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.467 11:07:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.467 11:07:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.726 11:07:31 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:02.726 ************************************ 00:20:02.726 END TEST nvmf_invalid 00:20:02.726 ************************************ 00:20:02.726 00:20:02.726 real 0m6.149s 00:20:02.726 user 0m24.709s 00:20:02.726 sys 0m1.353s 00:20:02.726 11:07:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:02.726 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:20:02.726 11:07:31 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:20:02.726 11:07:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:02.726 11:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.726 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:20:02.726 ************************************ 00:20:02.726 START TEST nvmf_abort 00:20:02.726 ************************************ 00:20:02.726 11:07:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:20:02.726 * Looking for test storage... 
00:20:02.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:02.726 11:07:31 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:02.726 11:07:31 -- nvmf/common.sh@7 -- # uname -s 00:20:02.726 11:07:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.726 11:07:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.726 11:07:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.726 11:07:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.726 11:07:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.726 11:07:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.726 11:07:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.726 11:07:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.726 11:07:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.726 11:07:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.726 11:07:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:02.726 11:07:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:02.726 11:07:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.726 11:07:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.726 11:07:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:02.726 11:07:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.726 11:07:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:02.726 11:07:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.726 11:07:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.726 11:07:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.726 11:07:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.726 11:07:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.726 11:07:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.726 11:07:31 -- paths/export.sh@5 -- # export PATH 00:20:02.726 11:07:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.726 11:07:31 -- nvmf/common.sh@47 -- # : 0 00:20:02.726 11:07:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.726 11:07:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.726 11:07:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.726 11:07:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.726 11:07:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.726 11:07:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.726 11:07:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.726 11:07:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.726 11:07:31 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:02.726 11:07:31 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:20:02.726 11:07:31 -- target/abort.sh@14 -- # nvmftestinit 00:20:02.726 11:07:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:02.726 11:07:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.726 11:07:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:02.726 11:07:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:02.726 11:07:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:02.726 11:07:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.726 11:07:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.726 11:07:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.726 11:07:31 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:02.726 11:07:31 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:02.726 11:07:31 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:02.726 11:07:31 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:02.726 11:07:31 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:02.726 11:07:31 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:02.726 11:07:31 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.726 11:07:31 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.726 11:07:31 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:02.726 11:07:31 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:02.726 11:07:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:02.726 11:07:31 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:02.726 11:07:31 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:02.726 11:07:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.726 11:07:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:02.726 11:07:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:02.726 11:07:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:02.726 11:07:31 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:02.726 11:07:31 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:02.985 11:07:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:02.985 Cannot find device "nvmf_tgt_br" 00:20:02.985 11:07:31 -- nvmf/common.sh@155 -- # true 00:20:02.985 11:07:31 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.985 Cannot find device "nvmf_tgt_br2" 00:20:02.985 11:07:31 -- nvmf/common.sh@156 -- # true 00:20:02.985 11:07:31 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:02.985 11:07:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:02.985 Cannot find device "nvmf_tgt_br" 00:20:02.985 11:07:31 -- nvmf/common.sh@158 -- # true 00:20:02.985 11:07:31 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:02.985 Cannot find device "nvmf_tgt_br2" 00:20:02.985 11:07:31 -- nvmf/common.sh@159 -- # true 00:20:02.985 11:07:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:02.985 11:07:31 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:02.985 11:07:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.985 11:07:31 -- nvmf/common.sh@162 -- # true 00:20:02.985 11:07:31 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.985 11:07:31 -- nvmf/common.sh@163 -- # true 00:20:02.985 11:07:31 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.985 11:07:31 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:02.985 11:07:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:02.985 11:07:31 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:02.985 11:07:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:02.985 11:07:31 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:02.985 11:07:31 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:02.985 11:07:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:02.985 11:07:31 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:02.985 11:07:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:02.985 11:07:31 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:02.985 11:07:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:02.985 11:07:31 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:02.985 11:07:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:02.985 11:07:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:02.985 11:07:31 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:02.985 11:07:31 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:02.985 11:07:31 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:03.244 11:07:31 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.244 11:07:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.244 11:07:31 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.244 11:07:31 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.244 11:07:31 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.244 11:07:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:03.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:20:03.244 00:20:03.244 --- 10.0.0.2 ping statistics --- 00:20:03.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.244 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:03.244 11:07:31 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:03.244 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.244 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:03.244 00:20:03.244 --- 10.0.0.3 ping statistics --- 00:20:03.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.244 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:03.244 11:07:31 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:03.244 00:20:03.244 --- 10.0.0.1 ping statistics --- 00:20:03.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.244 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:03.244 11:07:31 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.244 11:07:31 -- nvmf/common.sh@422 -- # return 0 00:20:03.244 11:07:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:03.244 11:07:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.244 11:07:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:03.244 11:07:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:03.244 11:07:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.244 11:07:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:03.244 11:07:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:03.244 11:07:31 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:20:03.244 11:07:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:03.244 11:07:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:03.244 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:20:03.244 11:07:31 -- nvmf/common.sh@470 -- # nvmfpid=84704 00:20:03.244 11:07:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:03.244 11:07:31 -- nvmf/common.sh@471 -- # waitforlisten 84704 00:20:03.244 11:07:31 -- common/autotest_common.sh@817 -- # '[' -z 84704 ']' 00:20:03.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
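For context, the nvmf_veth_init sequence traced above builds the virtual test network this run uses: a host-side veth (nvmf_init_if, 10.0.0.1/24) and two target-side veths (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with iptables rules admitting TCP port 4420 and bridge forwarding. Condensed into a standalone sketch using the interface names and addresses shown in the log (the preliminary teardown and error handling are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt ...), so the initiator side at 10.0.0.1 reaches the target's TCP listener at 10.0.0.2:4420 across the bridge, as the ping checks above confirm.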
00:20:03.244 11:07:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.244 11:07:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:03.244 11:07:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.244 11:07:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.244 11:07:31 -- common/autotest_common.sh@10 -- # set +x 00:20:03.244 [2024-04-18 11:07:31.780561] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:03.244 [2024-04-18 11:07:31.780925] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.503 [2024-04-18 11:07:31.928304] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.503 [2024-04-18 11:07:32.026120] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.503 [2024-04-18 11:07:32.026474] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.503 [2024-04-18 11:07:32.026756] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.503 [2024-04-18 11:07:32.026945] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.503 [2024-04-18 11:07:32.027106] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.503 [2024-04-18 11:07:32.027490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.503 [2024-04-18 11:07:32.027583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.503 [2024-04-18 11:07:32.027591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.438 11:07:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.438 11:07:32 -- common/autotest_common.sh@850 -- # return 0 00:20:04.438 11:07:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:04.438 11:07:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 11:07:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.438 11:07:32 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:20:04.438 11:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 [2024-04-18 11:07:32.850303] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.438 11:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.438 11:07:32 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:20:04.438 11:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 Malloc0 00:20:04.438 11:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.438 11:07:32 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:04.438 11:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 Delay0 00:20:04.438 11:07:32 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:20:04.438 11:07:32 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:04.438 11:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 11:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.438 11:07:32 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:20:04.438 11:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 11:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.438 11:07:32 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:04.438 11:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 [2024-04-18 11:07:32.933372] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.438 11:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.438 11:07:32 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:04.438 11:07:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.438 11:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.438 11:07:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.438 11:07:32 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:20:04.695 [2024-04-18 11:07:33.142096] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:06.591 Initializing NVMe Controllers 00:20:06.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:06.591 controller IO queue size 128 less than required 00:20:06.591 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:20:06.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:20:06.591 Initialization complete. Launching workers. 
00:20:06.591 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31498 00:20:06.591 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31559, failed to submit 62 00:20:06.591 success 31502, unsuccess 57, failed 0 00:20:06.591 11:07:35 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:06.591 11:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.591 11:07:35 -- common/autotest_common.sh@10 -- # set +x 00:20:06.591 11:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.591 11:07:35 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:06.591 11:07:35 -- target/abort.sh@38 -- # nvmftestfini 00:20:06.591 11:07:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:06.591 11:07:35 -- nvmf/common.sh@117 -- # sync 00:20:06.849 11:07:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.849 11:07:35 -- nvmf/common.sh@120 -- # set +e 00:20:06.849 11:07:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.849 11:07:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.849 rmmod nvme_tcp 00:20:06.849 rmmod nvme_fabrics 00:20:06.849 rmmod nvme_keyring 00:20:06.849 11:07:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.849 11:07:35 -- nvmf/common.sh@124 -- # set -e 00:20:06.849 11:07:35 -- nvmf/common.sh@125 -- # return 0 00:20:06.849 11:07:35 -- nvmf/common.sh@478 -- # '[' -n 84704 ']' 00:20:06.849 11:07:35 -- nvmf/common.sh@479 -- # killprocess 84704 00:20:06.849 11:07:35 -- common/autotest_common.sh@936 -- # '[' -z 84704 ']' 00:20:06.849 11:07:35 -- common/autotest_common.sh@940 -- # kill -0 84704 00:20:06.849 11:07:35 -- common/autotest_common.sh@941 -- # uname 00:20:06.849 11:07:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.849 11:07:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84704 00:20:06.849 killing process with pid 84704 00:20:06.849 11:07:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:06.849 11:07:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:06.849 11:07:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84704' 00:20:06.849 11:07:35 -- common/autotest_common.sh@955 -- # kill 84704 00:20:06.849 11:07:35 -- common/autotest_common.sh@960 -- # wait 84704 00:20:07.108 11:07:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:07.108 11:07:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:07.108 11:07:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:07.108 11:07:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.108 11:07:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.108 11:07:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.108 11:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.108 11:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.108 11:07:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:07.108 00:20:07.108 real 0m4.352s 00:20:07.108 user 0m12.568s 00:20:07.108 sys 0m0.989s 00:20:07.108 11:07:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:07.108 ************************************ 00:20:07.108 11:07:35 -- common/autotest_common.sh@10 -- # set +x 00:20:07.108 END TEST nvmf_abort 00:20:07.108 ************************************ 00:20:07.108 11:07:35 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:20:07.108 11:07:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:07.108 11:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:07.108 11:07:35 -- common/autotest_common.sh@10 -- # set +x 00:20:07.108 ************************************ 00:20:07.108 START TEST nvmf_ns_hotplug_stress 00:20:07.108 ************************************ 00:20:07.108 11:07:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:20:07.366 * Looking for test storage... 00:20:07.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:07.366 11:07:35 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.366 11:07:35 -- nvmf/common.sh@7 -- # uname -s 00:20:07.366 11:07:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.366 11:07:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.366 11:07:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.366 11:07:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.366 11:07:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.366 11:07:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.366 11:07:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.366 11:07:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.366 11:07:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.366 11:07:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.366 11:07:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:07.366 11:07:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:07.366 11:07:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.366 11:07:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.366 11:07:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.366 11:07:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.366 11:07:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.366 11:07:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.366 11:07:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.366 11:07:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.366 11:07:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.367 11:07:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.367 11:07:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.367 11:07:35 -- paths/export.sh@5 -- # export PATH 00:20:07.367 11:07:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.367 11:07:35 -- nvmf/common.sh@47 -- # : 0 00:20:07.367 11:07:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.367 11:07:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.367 11:07:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.367 11:07:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.367 11:07:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.367 11:07:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.367 11:07:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.367 11:07:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.367 11:07:35 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:07.367 11:07:35 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:20:07.367 11:07:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:07.367 11:07:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.367 11:07:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:07.367 11:07:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:07.367 11:07:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:07.367 11:07:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.367 11:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.367 11:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.367 11:07:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:07.367 11:07:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:07.367 11:07:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:07.367 11:07:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:07.367 11:07:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:07.367 11:07:35 -- nvmf/common.sh@421 
-- # nvmf_veth_init 00:20:07.367 11:07:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.367 11:07:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.367 11:07:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:07.367 11:07:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:07.367 11:07:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.367 11:07:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.367 11:07:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.367 11:07:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.367 11:07:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.367 11:07:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.367 11:07:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.367 11:07:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.367 11:07:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:07.367 11:07:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:07.367 Cannot find device "nvmf_tgt_br" 00:20:07.367 11:07:35 -- nvmf/common.sh@155 -- # true 00:20:07.367 11:07:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.367 Cannot find device "nvmf_tgt_br2" 00:20:07.367 11:07:35 -- nvmf/common.sh@156 -- # true 00:20:07.367 11:07:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:07.367 11:07:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:07.367 Cannot find device "nvmf_tgt_br" 00:20:07.367 11:07:35 -- nvmf/common.sh@158 -- # true 00:20:07.367 11:07:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:07.367 Cannot find device "nvmf_tgt_br2" 00:20:07.367 11:07:35 -- nvmf/common.sh@159 -- # true 00:20:07.367 11:07:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:07.367 11:07:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:07.367 11:07:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.367 11:07:35 -- nvmf/common.sh@162 -- # true 00:20:07.367 11:07:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.367 11:07:35 -- nvmf/common.sh@163 -- # true 00:20:07.367 11:07:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.367 11:07:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.367 11:07:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.367 11:07:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.367 11:07:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.367 11:07:35 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.628 11:07:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.628 11:07:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:07.628 11:07:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:07.628 11:07:36 -- nvmf/common.sh@183 -- # ip link set 
nvmf_init_if up 00:20:07.628 11:07:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:07.628 11:07:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:07.628 11:07:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:07.628 11:07:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.628 11:07:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.628 11:07:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.628 11:07:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:07.628 11:07:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:07.628 11:07:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.628 11:07:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.628 11:07:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:07.628 11:07:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.628 11:07:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.628 11:07:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:07.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:20:07.628 00:20:07.628 --- 10.0.0.2 ping statistics --- 00:20:07.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.628 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:07.628 11:07:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:07.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:07.628 00:20:07.628 --- 10.0.0.3 ping statistics --- 00:20:07.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.628 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:07.628 11:07:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:07.628 00:20:07.628 --- 10.0.0.1 ping statistics --- 00:20:07.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.628 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:07.628 11:07:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.628 11:07:36 -- nvmf/common.sh@422 -- # return 0 00:20:07.628 11:07:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:07.628 11:07:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.628 11:07:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:07.628 11:07:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:07.628 11:07:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.628 11:07:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:07.628 11:07:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:07.628 11:07:36 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:20:07.628 11:07:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:07.628 11:07:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:07.628 11:07:36 -- common/autotest_common.sh@10 -- # set +x 00:20:07.628 11:07:36 -- nvmf/common.sh@470 -- # nvmfpid=84981 00:20:07.628 11:07:36 -- nvmf/common.sh@471 -- # waitforlisten 84981 00:20:07.628 11:07:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:07.628 11:07:36 -- common/autotest_common.sh@817 -- # '[' -z 84981 ']' 00:20:07.628 11:07:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.628 11:07:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.628 11:07:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.628 11:07:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.628 11:07:36 -- common/autotest_common.sh@10 -- # set +x 00:20:07.628 [2024-04-18 11:07:36.224908] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:07.628 [2024-04-18 11:07:36.225008] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.888 [2024-04-18 11:07:36.369152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.888 [2024-04-18 11:07:36.458726] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.888 [2024-04-18 11:07:36.459052] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.888 [2024-04-18 11:07:36.459290] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.888 [2024-04-18 11:07:36.459442] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.888 [2024-04-18 11:07:36.459648] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
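The nvmf_veth_init block above builds a small veth/bridge topology: the initiator stays in the root namespace on 10.0.0.1 (nvmf_init_if) and the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, with both ends bridged over nvmf_br. A condensed sketch of those commands, taken from the trace (the second target interface on 10.0.0.3 and a few symmetric link-up calls are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the trace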
00:20:07.888 [2024-04-18 11:07:36.459797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.888 [2024-04-18 11:07:36.460256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.888 [2024-04-18 11:07:36.460267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.146 11:07:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:08.146 11:07:36 -- common/autotest_common.sh@850 -- # return 0 00:20:08.146 11:07:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:08.146 11:07:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:08.146 11:07:36 -- common/autotest_common.sh@10 -- # set +x 00:20:08.146 11:07:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.146 11:07:36 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:20:08.146 11:07:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:08.404 [2024-04-18 11:07:36.838832] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.404 11:07:36 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:08.674 11:07:37 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.931 [2024-04-18 11:07:37.317258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.931 11:07:37 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:08.931 11:07:37 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:20:09.496 Malloc0 00:20:09.496 11:07:37 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:09.496 Delay0 00:20:09.754 11:07:38 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:10.013 11:07:38 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:20:10.013 NULL1 00:20:10.271 11:07:38 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:10.271 11:07:38 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=85093 00:20:10.271 11:07:38 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:20:10.271 11:07:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:10.271 11:07:38 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:11.644 Read completed with error (sct=0, sc=11) 00:20:11.644 11:07:40 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:11.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:11.644 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:20:11.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:11.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:11.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:11.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:11.902 11:07:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:20:11.902 11:07:40 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:20:12.161 true 00:20:12.161 11:07:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:12.161 11:07:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:13.094 11:07:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:13.094 11:07:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:20:13.094 11:07:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:20:13.351 true 00:20:13.351 11:07:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:13.351 11:07:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:13.609 11:07:42 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:13.866 11:07:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:20:13.866 11:07:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:20:14.123 true 00:20:14.123 11:07:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:14.123 11:07:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:15.055 11:07:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:15.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:15.313 11:07:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:20:15.313 11:07:43 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:20:15.313 true 00:20:15.313 11:07:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:15.313 11:07:43 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:15.571 11:07:44 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:15.829 11:07:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:20:15.829 11:07:44 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:20:16.128 true 00:20:16.128 11:07:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:16.128 11:07:44 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:17.059 11:07:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
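The repeating remove_ns / add_ns / bdev_null_resize pattern above is the hot-plug stress loop: while spdk_nvme_perf (PERF_PID) keeps issuing random reads against cnode1, the namespace backed by Delay0 is removed and re-added, and the NULL1 bdev is resized upward (1001, 1002, ...) on each pass. A rough sketch of that loop, inferred from the trace (the real ns_hotplug_stress.sh control flow may differ slightly):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                        # loop only while perf is still running
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove NSID 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
  null_size=$((null_size + 1))
  $rpc bdev_null_resize NULL1 "$null_size"                       # resize NULL1 under load
done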
00:20:17.059 11:07:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:20:17.059 11:07:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:20:17.316 true 00:20:17.316 11:07:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:17.316 11:07:45 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:17.573 11:07:46 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:18.139 11:07:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:20:18.139 11:07:46 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:20:18.139 true 00:20:18.139 11:07:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:18.139 11:07:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:18.397 11:07:46 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:18.655 11:07:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:20:18.655 11:07:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:20:18.912 true 00:20:18.912 11:07:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:18.912 11:07:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:19.845 11:07:48 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:20.102 11:07:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:20:20.102 11:07:48 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:20:20.359 true 00:20:20.359 11:07:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:20.359 11:07:48 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.617 11:07:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:20.875 11:07:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:20:20.875 11:07:49 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:20:21.132 true 00:20:21.132 11:07:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:21.132 11:07:49 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:21.390 11:07:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:21.648 11:07:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:20:21.648 11:07:50 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:20:21.906 true 00:20:21.906 11:07:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:21.906 11:07:50 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:20:22.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:22.842 11:07:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:23.359 11:07:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:20:23.359 11:07:51 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:20:23.618 true 00:20:23.618 11:07:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:23.618 11:07:52 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:24.185 11:07:52 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:24.443 11:07:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:20:24.443 11:07:53 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:20:24.701 true 00:20:24.701 11:07:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:24.701 11:07:53 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:24.959 11:07:53 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:25.217 11:07:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:20:25.217 11:07:53 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:20:25.475 true 00:20:25.475 11:07:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:25.475 11:07:54 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:25.732 11:07:54 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:25.990 11:07:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:20:25.990 11:07:54 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:20:26.248 true 00:20:26.248 11:07:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:26.248 11:07:54 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:27.193 11:07:55 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:27.475 11:07:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:20:27.475 11:07:56 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:20:27.733 true 00:20:27.733 11:07:56 -- target/ns_hotplug_stress.sh@35 -- # 
kill -0 85093 00:20:27.733 11:07:56 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:27.990 11:07:56 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:28.555 11:07:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:20:28.556 11:07:56 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:20:28.556 true 00:20:28.556 11:07:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:28.556 11:07:57 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:29.121 11:07:57 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:29.379 11:07:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:20:29.379 11:07:57 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:20:29.379 true 00:20:29.379 11:07:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:29.379 11:07:58 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:30.313 11:07:58 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:30.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:30.572 11:07:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:20:30.572 11:07:59 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:20:30.830 true 00:20:30.830 11:07:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:30.830 11:07:59 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:31.087 11:07:59 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:31.345 11:07:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:20:31.345 11:07:59 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:20:31.603 true 00:20:31.603 11:08:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:31.603 11:08:00 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:32.222 11:08:00 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:32.480 11:08:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:20:32.480 11:08:01 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:20:33.046 true 00:20:33.046 11:08:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:33.046 11:08:01 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:33.046 11:08:01 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:33.362 11:08:01 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:20:33.362 11:08:01 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:20:33.620 true 00:20:33.620 11:08:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:33.620 11:08:02 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:33.879 11:08:02 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:34.137 11:08:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:20:34.137 11:08:02 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:20:34.137 true 00:20:34.394 11:08:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:34.394 11:08:02 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:35.341 11:08:03 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:35.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:35.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:35.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:35.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:35.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:35.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:35.599 11:08:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:20:35.599 11:08:04 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:20:35.857 true 00:20:35.857 11:08:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:35.857 11:08:04 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:36.789 11:08:05 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:36.789 11:08:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:20:36.789 11:08:05 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:20:37.047 true 00:20:37.047 11:08:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:37.047 11:08:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:37.306 11:08:05 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:37.564 11:08:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:20:37.564 11:08:06 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:20:37.823 true 00:20:37.823 11:08:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:37.823 11:08:06 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:38.758 11:08:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:20:39.016 11:08:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:20:39.016 11:08:07 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:20:39.275 true 00:20:39.275 11:08:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:39.275 11:08:07 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:39.533 11:08:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:39.533 11:08:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:20:39.533 11:08:08 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:20:39.790 true 00:20:39.790 11:08:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:39.790 11:08:08 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:40.048 11:08:08 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:40.311 11:08:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:20:40.311 11:08:08 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:20:40.571 true 00:20:40.571 11:08:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:40.571 11:08:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:40.571 Initializing NVMe Controllers 00:20:40.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.571 Controller IO queue size 128, less than required. 00:20:40.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.571 Controller IO queue size 128, less than required. 00:20:40.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.571 Initialization complete. Launching workers. 
00:20:40.571 ======================================================== 00:20:40.571 Latency(us) 00:20:40.571 Device Information : IOPS MiB/s Average min max 00:20:40.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 777.44 0.38 80284.46 3475.90 1080265.12 00:20:40.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9693.33 4.73 13204.72 4139.08 642276.17 00:20:40.571 ======================================================== 00:20:40.571 Total : 10470.78 5.11 18185.31 3475.90 1080265.12 00:20:40.571 00:20:40.829 11:08:09 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:41.087 11:08:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:20:41.087 11:08:09 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:20:41.345 true 00:20:41.345 11:08:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85093 00:20:41.345 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (85093) - No such process 00:20:41.345 11:08:09 -- target/ns_hotplug_stress.sh@44 -- # wait 85093 00:20:41.345 11:08:09 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:41.345 11:08:09 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:20:41.345 11:08:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:41.345 11:08:09 -- nvmf/common.sh@117 -- # sync 00:20:41.345 11:08:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:41.345 11:08:09 -- nvmf/common.sh@120 -- # set +e 00:20:41.345 11:08:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.345 11:08:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.345 rmmod nvme_tcp 00:20:41.345 rmmod nvme_fabrics 00:20:41.345 rmmod nvme_keyring 00:20:41.345 11:08:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.345 11:08:09 -- nvmf/common.sh@124 -- # set -e 00:20:41.345 11:08:09 -- nvmf/common.sh@125 -- # return 0 00:20:41.345 11:08:09 -- nvmf/common.sh@478 -- # '[' -n 84981 ']' 00:20:41.345 11:08:09 -- nvmf/common.sh@479 -- # killprocess 84981 00:20:41.345 11:08:09 -- common/autotest_common.sh@936 -- # '[' -z 84981 ']' 00:20:41.345 11:08:09 -- common/autotest_common.sh@940 -- # kill -0 84981 00:20:41.345 11:08:09 -- common/autotest_common.sh@941 -- # uname 00:20:41.345 11:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:41.345 11:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84981 00:20:41.345 killing process with pid 84981 00:20:41.345 11:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:41.345 11:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:41.345 11:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84981' 00:20:41.345 11:08:09 -- common/autotest_common.sh@955 -- # kill 84981 00:20:41.345 11:08:09 -- common/autotest_common.sh@960 -- # wait 84981 00:20:41.604 11:08:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:41.604 11:08:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:41.604 11:08:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:41.604 11:08:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.604 11:08:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.604 11:08:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.604 11:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:20:41.604 11:08:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.604 11:08:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:41.604 00:20:41.604 real 0m34.478s 00:20:41.604 user 2m27.963s 00:20:41.604 sys 0m8.076s 00:20:41.604 11:08:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.604 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:20:41.604 ************************************ 00:20:41.604 END TEST nvmf_ns_hotplug_stress 00:20:41.604 ************************************ 00:20:41.862 11:08:10 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:41.862 11:08:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:41.862 11:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.862 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:20:41.862 ************************************ 00:20:41.862 START TEST nvmf_connect_stress 00:20:41.862 ************************************ 00:20:41.862 11:08:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:41.862 * Looking for test storage... 00:20:41.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:41.862 11:08:10 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:41.862 11:08:10 -- nvmf/common.sh@7 -- # uname -s 00:20:41.862 11:08:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.862 11:08:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.862 11:08:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.862 11:08:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.862 11:08:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.862 11:08:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.862 11:08:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.862 11:08:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.862 11:08:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.862 11:08:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.862 11:08:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:41.862 11:08:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:41.862 11:08:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.862 11:08:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.862 11:08:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:41.862 11:08:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.862 11:08:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41.862 11:08:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.862 11:08:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.862 11:08:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.862 11:08:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.862 11:08:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.862 11:08:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.862 11:08:10 -- paths/export.sh@5 -- # export PATH 00:20:41.862 11:08:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.862 11:08:10 -- nvmf/common.sh@47 -- # : 0 00:20:41.862 11:08:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.862 11:08:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.862 11:08:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.862 11:08:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.862 11:08:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.862 11:08:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.862 11:08:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.862 11:08:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.862 11:08:10 -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:41.862 11:08:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:41.862 11:08:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.862 11:08:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:41.862 11:08:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:41.862 11:08:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:41.862 11:08:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.862 11:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.862 11:08:10 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.862 11:08:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:41.862 11:08:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:41.862 11:08:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:41.862 11:08:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:41.862 11:08:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:41.862 11:08:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:41.862 11:08:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.862 11:08:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.862 11:08:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:41.862 11:08:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:41.862 11:08:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:41.862 11:08:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:41.862 11:08:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:41.862 11:08:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.862 11:08:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:41.862 11:08:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:41.862 11:08:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:41.862 11:08:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:41.862 11:08:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:41.862 11:08:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:41.862 Cannot find device "nvmf_tgt_br" 00:20:41.862 11:08:10 -- nvmf/common.sh@155 -- # true 00:20:41.862 11:08:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.862 Cannot find device "nvmf_tgt_br2" 00:20:41.862 11:08:10 -- nvmf/common.sh@156 -- # true 00:20:41.862 11:08:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:41.862 11:08:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:42.121 Cannot find device "nvmf_tgt_br" 00:20:42.121 11:08:10 -- nvmf/common.sh@158 -- # true 00:20:42.121 11:08:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:42.121 Cannot find device "nvmf_tgt_br2" 00:20:42.121 11:08:10 -- nvmf/common.sh@159 -- # true 00:20:42.121 11:08:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:42.121 11:08:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:42.121 11:08:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.121 11:08:10 -- nvmf/common.sh@162 -- # true 00:20:42.121 11:08:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.121 11:08:10 -- nvmf/common.sh@163 -- # true 00:20:42.121 11:08:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.121 11:08:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.121 11:08:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:42.121 11:08:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:42.121 11:08:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:42.121 11:08:10 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:42.121 11:08:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:42.121 11:08:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:42.121 11:08:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:42.121 11:08:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:42.121 11:08:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:42.121 11:08:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:42.121 11:08:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:42.121 11:08:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:42.121 11:08:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:42.121 11:08:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:42.121 11:08:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:42.121 11:08:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:42.121 11:08:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:42.121 11:08:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:42.121 11:08:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:42.121 11:08:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:42.121 11:08:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:42.121 11:08:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:42.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:42.121 00:20:42.121 --- 10.0.0.2 ping statistics --- 00:20:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.121 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:42.121 11:08:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:42.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:42.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:42.121 00:20:42.121 --- 10.0.0.3 ping statistics --- 00:20:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.121 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:42.121 11:08:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:42.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:42.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:20:42.380 00:20:42.380 --- 10.0.0.1 ping statistics --- 00:20:42.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.380 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:42.380 11:08:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.380 11:08:10 -- nvmf/common.sh@422 -- # return 0 00:20:42.380 11:08:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:42.380 11:08:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.380 11:08:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:42.380 11:08:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:42.380 11:08:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.380 11:08:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:42.380 11:08:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:42.380 11:08:10 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:42.380 11:08:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:42.380 11:08:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:42.380 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:20:42.380 11:08:10 -- nvmf/common.sh@470 -- # nvmfpid=86253 00:20:42.380 11:08:10 -- nvmf/common.sh@471 -- # waitforlisten 86253 00:20:42.380 11:08:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:42.380 11:08:10 -- common/autotest_common.sh@817 -- # '[' -z 86253 ']' 00:20:42.380 11:08:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.380 11:08:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:42.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.380 11:08:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.380 11:08:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:42.380 11:08:10 -- common/autotest_common.sh@10 -- # set +x 00:20:42.380 [2024-04-18 11:08:10.851473] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:42.380 [2024-04-18 11:08:10.851619] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.380 [2024-04-18 11:08:10.992993] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:42.638 [2024-04-18 11:08:11.077423] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.638 [2024-04-18 11:08:11.077491] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.638 [2024-04-18 11:08:11.077503] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.638 [2024-04-18 11:08:11.077512] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.638 [2024-04-18 11:08:11.077520] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.638 [2024-04-18 11:08:11.077851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.638 [2024-04-18 11:08:11.078152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.638 [2024-04-18 11:08:11.078156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.573 11:08:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:43.573 11:08:11 -- common/autotest_common.sh@850 -- # return 0 00:20:43.573 11:08:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:43.573 11:08:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:43.573 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 11:08:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.573 11:08:11 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.573 11:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.573 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 [2024-04-18 11:08:11.974762] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.573 11:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.573 11:08:11 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:43.573 11:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.573 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 11:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.573 11:08:11 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.573 11:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.573 11:08:11 -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 [2024-04-18 11:08:11.998972] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.573 11:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.573 11:08:12 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:43.573 11:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.573 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 NULL1 00:20:43.573 11:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.573 11:08:12 -- target/connect_stress.sh@21 -- # PERF_PID=86306 00:20:43.573 11:08:12 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:43.573 11:08:12 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:20:43.573 11:08:12 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # seq 1 20 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- 
target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:43.573 11:08:12 -- target/connect_stress.sh@28 -- # cat 00:20:43.573 11:08:12 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:43.573 11:08:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:43.573 11:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.573 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:20:43.831 11:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.831 11:08:12 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:43.831 11:08:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:43.831 11:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.831 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:20:44.397 11:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.397 11:08:12 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:44.397 11:08:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:44.397 11:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.397 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:20:44.655 11:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.655 11:08:13 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:44.655 11:08:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:44.655 11:08:13 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:44.655 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:20:44.913 11:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.913 11:08:13 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:44.913 11:08:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:44.913 11:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.913 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:20:45.176 11:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.176 11:08:13 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:45.176 11:08:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:45.176 11:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.176 11:08:13 -- common/autotest_common.sh@10 -- # set +x 00:20:45.435 11:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.435 11:08:14 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:45.435 11:08:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:45.435 11:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.435 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:20:46.001 11:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.001 11:08:14 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:46.001 11:08:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:46.001 11:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.001 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 11:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.259 11:08:14 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:46.259 11:08:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:46.259 11:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.259 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:20:46.517 11:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.517 11:08:14 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:46.517 11:08:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:46.517 11:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.517 11:08:14 -- common/autotest_common.sh@10 -- # set +x 00:20:46.775 11:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.775 11:08:15 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:46.775 11:08:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:46.775 11:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.775 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:20:47.033 11:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.033 11:08:15 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:47.033 11:08:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:47.033 11:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.033 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:20:47.598 11:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.598 11:08:15 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:47.598 11:08:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:47.598 11:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.598 11:08:15 -- common/autotest_common.sh@10 -- # set +x 00:20:47.856 11:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.856 11:08:16 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:47.856 11:08:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:47.856 11:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.856 
11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:20:48.114 11:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.114 11:08:16 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:48.114 11:08:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.114 11:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.114 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:20:48.372 11:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.372 11:08:16 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:48.372 11:08:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.372 11:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.372 11:08:16 -- common/autotest_common.sh@10 -- # set +x 00:20:48.630 11:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.630 11:08:17 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:48.630 11:08:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.630 11:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.630 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:20:49.196 11:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.196 11:08:17 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:49.196 11:08:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.196 11:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.196 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:20:49.455 11:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.455 11:08:17 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:49.455 11:08:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.455 11:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.455 11:08:17 -- common/autotest_common.sh@10 -- # set +x 00:20:49.714 11:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.714 11:08:18 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:49.714 11:08:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.714 11:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.714 11:08:18 -- common/autotest_common.sh@10 -- # set +x 00:20:49.972 11:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.972 11:08:18 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:49.972 11:08:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.972 11:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.972 11:08:18 -- common/autotest_common.sh@10 -- # set +x 00:20:50.231 11:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.231 11:08:18 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:50.231 11:08:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.231 11:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.231 11:08:18 -- common/autotest_common.sh@10 -- # set +x 00:20:50.797 11:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.797 11:08:19 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:50.797 11:08:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.797 11:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.797 11:08:19 -- common/autotest_common.sh@10 -- # set +x 00:20:51.056 11:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.056 11:08:19 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:51.056 11:08:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.056 11:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.056 11:08:19 -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.315 11:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.315 11:08:19 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:51.315 11:08:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.315 11:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.315 11:08:19 -- common/autotest_common.sh@10 -- # set +x 00:20:51.573 11:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.573 11:08:20 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:51.573 11:08:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.573 11:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.573 11:08:20 -- common/autotest_common.sh@10 -- # set +x 00:20:51.831 11:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.831 11:08:20 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:51.831 11:08:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.831 11:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.831 11:08:20 -- common/autotest_common.sh@10 -- # set +x 00:20:52.397 11:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.397 11:08:20 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:52.397 11:08:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:52.397 11:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.397 11:08:20 -- common/autotest_common.sh@10 -- # set +x 00:20:52.656 11:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.656 11:08:21 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:52.656 11:08:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:52.656 11:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.656 11:08:21 -- common/autotest_common.sh@10 -- # set +x 00:20:52.920 11:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.920 11:08:21 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:52.920 11:08:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:52.920 11:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.920 11:08:21 -- common/autotest_common.sh@10 -- # set +x 00:20:53.177 11:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.177 11:08:21 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:53.177 11:08:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:53.177 11:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.177 11:08:21 -- common/autotest_common.sh@10 -- # set +x 00:20:53.436 11:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.436 11:08:22 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:53.436 11:08:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:53.436 11:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.436 11:08:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.693 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.950 11:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.950 11:08:22 -- target/connect_stress.sh@34 -- # kill -0 86306 00:20:53.950 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (86306) - No such process 00:20:53.950 11:08:22 -- target/connect_stress.sh@38 -- # wait 86306 00:20:53.950 11:08:22 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:20:53.950 11:08:22 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:53.950 11:08:22 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:20:53.950 11:08:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:53.950 11:08:22 -- nvmf/common.sh@117 -- # sync 00:20:53.950 11:08:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.950 11:08:22 -- nvmf/common.sh@120 -- # set +e 00:20:53.950 11:08:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.950 11:08:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.950 rmmod nvme_tcp 00:20:53.950 rmmod nvme_fabrics 00:20:53.950 rmmod nvme_keyring 00:20:53.950 11:08:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.950 11:08:22 -- nvmf/common.sh@124 -- # set -e 00:20:53.950 11:08:22 -- nvmf/common.sh@125 -- # return 0 00:20:53.950 11:08:22 -- nvmf/common.sh@478 -- # '[' -n 86253 ']' 00:20:53.950 11:08:22 -- nvmf/common.sh@479 -- # killprocess 86253 00:20:53.950 11:08:22 -- common/autotest_common.sh@936 -- # '[' -z 86253 ']' 00:20:53.950 11:08:22 -- common/autotest_common.sh@940 -- # kill -0 86253 00:20:53.950 11:08:22 -- common/autotest_common.sh@941 -- # uname 00:20:53.950 11:08:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:53.950 11:08:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86253 00:20:53.950 11:08:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:53.950 11:08:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:53.950 11:08:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86253' 00:20:53.950 killing process with pid 86253 00:20:53.950 11:08:22 -- common/autotest_common.sh@955 -- # kill 86253 00:20:53.950 11:08:22 -- common/autotest_common.sh@960 -- # wait 86253 00:20:54.208 11:08:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:54.208 11:08:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:54.208 11:08:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:54.208 11:08:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.208 11:08:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.208 11:08:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.208 11:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.208 11:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.208 11:08:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:54.208 ************************************ 00:20:54.208 END TEST nvmf_connect_stress 00:20:54.208 ************************************ 00:20:54.208 00:20:54.208 real 0m12.422s 00:20:54.208 user 0m41.297s 00:20:54.208 sys 0m3.434s 00:20:54.208 11:08:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:54.208 11:08:22 -- common/autotest_common.sh@10 -- # set +x 00:20:54.208 11:08:22 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:54.208 11:08:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:54.208 11:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:54.208 11:08:22 -- common/autotest_common.sh@10 -- # set +x 00:20:54.466 ************************************ 00:20:54.466 START TEST nvmf_fused_ordering 00:20:54.466 ************************************ 00:20:54.466 11:08:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:54.466 * Looking for test storage... 
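The connect_stress run above finishes with nvmftestfini: the kernel nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the nvmf_tgt application (pid 86253) is killed, the per-test namespace is removed and the initiator address flushed. A rough sketch of that teardown is shown below, using the pid and interface names from the log; the wait loop and the explicit "ip netns delete" are illustrative stand-ins for the harness's killprocess and remove_spdk_ns helpers in common.sh.

#!/usr/bin/env bash
# Illustrative teardown matching the nvmftestfini sequence recorded above.

NVMF_PID=86253   # pid of the nvmf_tgt started for this test (from the log)

# Flush caches and unload the kernel initiator modules loaded for the test.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target application and wait until the process is gone.
kill "$NVMF_PID"
while kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.5; done

# Remove the per-test network namespace and the initiator's address.
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if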
00:20:54.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:54.466 11:08:22 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.466 11:08:22 -- nvmf/common.sh@7 -- # uname -s 00:20:54.466 11:08:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.466 11:08:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.466 11:08:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.466 11:08:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.466 11:08:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.466 11:08:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.466 11:08:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.466 11:08:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.466 11:08:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.466 11:08:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.466 11:08:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:54.466 11:08:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:54.466 11:08:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.466 11:08:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.466 11:08:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.466 11:08:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.466 11:08:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.466 11:08:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.466 11:08:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.466 11:08:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.466 11:08:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.466 11:08:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.467 11:08:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.467 11:08:22 -- paths/export.sh@5 -- # export PATH 00:20:54.467 11:08:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.467 11:08:22 -- nvmf/common.sh@47 -- # : 0 00:20:54.467 11:08:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.467 11:08:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.467 11:08:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.467 11:08:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.467 11:08:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.467 11:08:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.467 11:08:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.467 11:08:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.467 11:08:22 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:54.467 11:08:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:54.467 11:08:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.467 11:08:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:54.467 11:08:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:54.467 11:08:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:54.467 11:08:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.467 11:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.467 11:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.467 11:08:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:54.467 11:08:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:54.467 11:08:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:54.467 11:08:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:54.467 11:08:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:54.467 11:08:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:54.467 11:08:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.467 11:08:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.467 11:08:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:54.467 11:08:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:54.467 11:08:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:54.467 11:08:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:54.467 11:08:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:54.467 11:08:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:54.467 11:08:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:54.467 11:08:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:54.467 11:08:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:54.467 11:08:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:54.467 11:08:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:54.467 11:08:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:54.467 Cannot find device "nvmf_tgt_br" 00:20:54.467 11:08:23 -- nvmf/common.sh@155 -- # true 00:20:54.467 11:08:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.467 Cannot find device "nvmf_tgt_br2" 00:20:54.467 11:08:23 -- nvmf/common.sh@156 -- # true 00:20:54.467 11:08:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:54.467 11:08:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:54.467 Cannot find device "nvmf_tgt_br" 00:20:54.467 11:08:23 -- nvmf/common.sh@158 -- # true 00:20:54.467 11:08:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:54.467 Cannot find device "nvmf_tgt_br2" 00:20:54.467 11:08:23 -- nvmf/common.sh@159 -- # true 00:20:54.467 11:08:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:54.467 11:08:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:54.725 11:08:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.725 11:08:23 -- nvmf/common.sh@162 -- # true 00:20:54.725 11:08:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.725 11:08:23 -- nvmf/common.sh@163 -- # true 00:20:54.725 11:08:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:54.725 11:08:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:54.725 11:08:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:54.725 11:08:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:54.725 11:08:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:54.725 11:08:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.725 11:08:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.725 11:08:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:54.725 11:08:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:54.725 11:08:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:54.725 11:08:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:54.725 11:08:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:54.725 11:08:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:54.725 11:08:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.725 11:08:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.725 11:08:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.725 11:08:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:54.725 11:08:23 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:54.725 11:08:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.725 11:08:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.725 11:08:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.725 11:08:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.725 11:08:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.725 11:08:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:54.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:20:54.725 00:20:54.725 --- 10.0.0.2 ping statistics --- 00:20:54.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.725 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:20:54.725 11:08:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:54.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:54.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:20:54.725 00:20:54.725 --- 10.0.0.3 ping statistics --- 00:20:54.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.725 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:54.725 11:08:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:54.725 00:20:54.725 --- 10.0.0.1 ping statistics --- 00:20:54.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.725 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:54.725 11:08:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.725 11:08:23 -- nvmf/common.sh@422 -- # return 0 00:20:54.725 11:08:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:54.725 11:08:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.725 11:08:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:54.725 11:08:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:54.725 11:08:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.725 11:08:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:54.725 11:08:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:54.725 11:08:23 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:54.725 11:08:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:54.725 11:08:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:54.725 11:08:23 -- common/autotest_common.sh@10 -- # set +x 00:20:54.725 11:08:23 -- nvmf/common.sh@470 -- # nvmfpid=86646 00:20:54.725 11:08:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.725 11:08:23 -- nvmf/common.sh@471 -- # waitforlisten 86646 00:20:54.725 11:08:23 -- common/autotest_common.sh@817 -- # '[' -z 86646 ']' 00:20:54.725 11:08:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.725 11:08:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:54.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.725 11:08:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
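With the network rebuilt for the fused_ordering test, the harness launches nvmf_tgt inside the namespace with core mask 0x2, waits for its RPC socket, and then configures the target. A simplified sketch of that bring-up follows; it assumes rpc_cmd is a thin wrapper around scripts/rpc.py -s /var/tmp/spdk.sock, and the polling loop and rpc() helper are illustrative stand-ins for waitforlisten rather than the harness code. The RPC names and arguments are the ones recorded in the log that follows.

#!/usr/bin/env bash
# Sketch of the fused_ordering target bring-up and configuration.

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk.sock

# Run the target in the test namespace with the same options as the log (0x2).
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!   # kept so the caller can stop the target later

# Poll until the application answers on its UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done

rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

# Same configuration sequence as the rpc_cmd calls recorded in the log.
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering client is then pointed at the 10.0.0.2:4420 listener created here, which is what produces the long run of fused_ordering(N) counters later in the log.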
00:20:54.725 11:08:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:54.725 11:08:23 -- common/autotest_common.sh@10 -- # set +x 00:20:54.983 [2024-04-18 11:08:23.424186] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:54.983 [2024-04-18 11:08:23.424284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.983 [2024-04-18 11:08:23.568415] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.241 [2024-04-18 11:08:23.664490] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.241 [2024-04-18 11:08:23.664558] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.241 [2024-04-18 11:08:23.664586] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.241 [2024-04-18 11:08:23.664595] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.241 [2024-04-18 11:08:23.664602] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.241 [2024-04-18 11:08:23.664636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.805 11:08:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:55.805 11:08:24 -- common/autotest_common.sh@850 -- # return 0 00:20:55.805 11:08:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:55.805 11:08:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:55.805 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:20:56.063 11:08:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.063 11:08:24 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.063 11:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.063 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:20:56.063 [2024-04-18 11:08:24.491658] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.063 11:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.063 11:08:24 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:56.063 11:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.063 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:20:56.063 11:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.063 11:08:24 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.063 11:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.063 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:20:56.063 [2024-04-18 11:08:24.507746] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.063 11:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.063 11:08:24 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:56.063 11:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.063 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:20:56.063 NULL1 00:20:56.063 11:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.063 11:08:24 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:56.063 11:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.063 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:20:56.063 11:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.063 11:08:24 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:56.063 11:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.063 11:08:24 -- common/autotest_common.sh@10 -- # set +x 00:20:56.063 11:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.063 11:08:24 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:56.063 [2024-04-18 11:08:24.560553] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:56.063 [2024-04-18 11:08:24.560615] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86696 ] 00:20:56.627 Attached to nqn.2016-06.io.spdk:cnode1 00:20:56.627 Namespace ID: 1 size: 1GB 00:20:56.627 fused_ordering(0) 00:20:56.627 fused_ordering(1) 00:20:56.627 fused_ordering(2) 00:20:56.627 fused_ordering(3) 00:20:56.627 fused_ordering(4) 00:20:56.627 fused_ordering(5) 00:20:56.627 fused_ordering(6) 00:20:56.627 fused_ordering(7) 00:20:56.627 fused_ordering(8) 00:20:56.627 fused_ordering(9) 00:20:56.627 fused_ordering(10) 00:20:56.627 fused_ordering(11) 00:20:56.627 fused_ordering(12) 00:20:56.627 fused_ordering(13) 00:20:56.627 fused_ordering(14) 00:20:56.627 fused_ordering(15) 00:20:56.627 fused_ordering(16) 00:20:56.627 fused_ordering(17) 00:20:56.627 fused_ordering(18) 00:20:56.627 fused_ordering(19) 00:20:56.627 fused_ordering(20) 00:20:56.627 fused_ordering(21) 00:20:56.627 fused_ordering(22) 00:20:56.627 fused_ordering(23) 00:20:56.627 fused_ordering(24) 00:20:56.627 fused_ordering(25) 00:20:56.627 fused_ordering(26) 00:20:56.627 fused_ordering(27) 00:20:56.627 fused_ordering(28) 00:20:56.627 fused_ordering(29) 00:20:56.627 fused_ordering(30) 00:20:56.627 fused_ordering(31) 00:20:56.627 fused_ordering(32) 00:20:56.627 fused_ordering(33) 00:20:56.627 fused_ordering(34) 00:20:56.627 fused_ordering(35) 00:20:56.627 fused_ordering(36) 00:20:56.627 fused_ordering(37) 00:20:56.627 fused_ordering(38) 00:20:56.627 fused_ordering(39) 00:20:56.627 fused_ordering(40) 00:20:56.627 fused_ordering(41) 00:20:56.628 fused_ordering(42) 00:20:56.628 fused_ordering(43) 00:20:56.628 fused_ordering(44) 00:20:56.628 fused_ordering(45) 00:20:56.628 fused_ordering(46) 00:20:56.628 fused_ordering(47) 00:20:56.628 fused_ordering(48) 00:20:56.628 fused_ordering(49) 00:20:56.628 fused_ordering(50) 00:20:56.628 fused_ordering(51) 00:20:56.628 fused_ordering(52) 00:20:56.628 fused_ordering(53) 00:20:56.628 fused_ordering(54) 00:20:56.628 fused_ordering(55) 00:20:56.628 fused_ordering(56) 00:20:56.628 fused_ordering(57) 00:20:56.628 fused_ordering(58) 00:20:56.628 fused_ordering(59) 00:20:56.628 fused_ordering(60) 00:20:56.628 fused_ordering(61) 00:20:56.628 fused_ordering(62) 00:20:56.628 fused_ordering(63) 00:20:56.628 fused_ordering(64) 00:20:56.628 fused_ordering(65) 00:20:56.628 fused_ordering(66) 00:20:56.628 fused_ordering(67) 00:20:56.628 fused_ordering(68) 00:20:56.628 
fused_ordering(69) 00:20:56.628 fused_ordering(70) 00:20:56.628 fused_ordering(71) 00:20:56.628 fused_ordering(72) 00:20:56.628 fused_ordering(73) 00:20:56.628 fused_ordering(74) 00:20:56.628 fused_ordering(75) 00:20:56.628 fused_ordering(76) 00:20:56.628 fused_ordering(77) 00:20:56.628 fused_ordering(78) 00:20:56.628 fused_ordering(79) 00:20:56.628 fused_ordering(80) 00:20:56.628 fused_ordering(81) 00:20:56.628 fused_ordering(82) 00:20:56.628 fused_ordering(83) 00:20:56.628 fused_ordering(84) 00:20:56.628 fused_ordering(85) 00:20:56.628 fused_ordering(86) 00:20:56.628 fused_ordering(87) 00:20:56.628 fused_ordering(88) 00:20:56.628 fused_ordering(89) 00:20:56.628 fused_ordering(90) 00:20:56.628 fused_ordering(91) 00:20:56.628 fused_ordering(92) 00:20:56.628 fused_ordering(93) 00:20:56.628 fused_ordering(94) 00:20:56.628 fused_ordering(95) 00:20:56.628 fused_ordering(96) 00:20:56.628 fused_ordering(97) 00:20:56.628 fused_ordering(98) 00:20:56.628 fused_ordering(99) 00:20:56.628 fused_ordering(100) 00:20:56.628 fused_ordering(101) 00:20:56.628 fused_ordering(102) 00:20:56.628 fused_ordering(103) 00:20:56.628 fused_ordering(104) 00:20:56.628 fused_ordering(105) 00:20:56.628 fused_ordering(106) 00:20:56.628 fused_ordering(107) 00:20:56.628 fused_ordering(108) 00:20:56.628 fused_ordering(109) 00:20:56.628 fused_ordering(110) 00:20:56.628 fused_ordering(111) 00:20:56.628 fused_ordering(112) 00:20:56.628 fused_ordering(113) 00:20:56.628 fused_ordering(114) 00:20:56.628 fused_ordering(115) 00:20:56.628 fused_ordering(116) 00:20:56.628 fused_ordering(117) 00:20:56.628 fused_ordering(118) 00:20:56.628 fused_ordering(119) 00:20:56.628 fused_ordering(120) 00:20:56.628 fused_ordering(121) 00:20:56.628 fused_ordering(122) 00:20:56.628 fused_ordering(123) 00:20:56.628 fused_ordering(124) 00:20:56.628 fused_ordering(125) 00:20:56.628 fused_ordering(126) 00:20:56.628 fused_ordering(127) 00:20:56.628 fused_ordering(128) 00:20:56.628 fused_ordering(129) 00:20:56.628 fused_ordering(130) 00:20:56.628 fused_ordering(131) 00:20:56.628 fused_ordering(132) 00:20:56.628 fused_ordering(133) 00:20:56.628 fused_ordering(134) 00:20:56.628 fused_ordering(135) 00:20:56.628 fused_ordering(136) 00:20:56.628 fused_ordering(137) 00:20:56.628 fused_ordering(138) 00:20:56.628 fused_ordering(139) 00:20:56.628 fused_ordering(140) 00:20:56.628 fused_ordering(141) 00:20:56.628 fused_ordering(142) 00:20:56.628 fused_ordering(143) 00:20:56.628 fused_ordering(144) 00:20:56.628 fused_ordering(145) 00:20:56.628 fused_ordering(146) 00:20:56.628 fused_ordering(147) 00:20:56.628 fused_ordering(148) 00:20:56.628 fused_ordering(149) 00:20:56.628 fused_ordering(150) 00:20:56.628 fused_ordering(151) 00:20:56.628 fused_ordering(152) 00:20:56.628 fused_ordering(153) 00:20:56.628 fused_ordering(154) 00:20:56.628 fused_ordering(155) 00:20:56.628 fused_ordering(156) 00:20:56.628 fused_ordering(157) 00:20:56.628 fused_ordering(158) 00:20:56.628 fused_ordering(159) 00:20:56.628 fused_ordering(160) 00:20:56.628 fused_ordering(161) 00:20:56.628 fused_ordering(162) 00:20:56.628 fused_ordering(163) 00:20:56.628 fused_ordering(164) 00:20:56.628 fused_ordering(165) 00:20:56.628 fused_ordering(166) 00:20:56.628 fused_ordering(167) 00:20:56.628 fused_ordering(168) 00:20:56.628 fused_ordering(169) 00:20:56.628 fused_ordering(170) 00:20:56.628 fused_ordering(171) 00:20:56.628 fused_ordering(172) 00:20:56.628 fused_ordering(173) 00:20:56.628 fused_ordering(174) 00:20:56.628 fused_ordering(175) 00:20:56.628 fused_ordering(176) 00:20:56.628 fused_ordering(177) 
00:20:56.628 fused_ordering(178) 00:20:56.628 fused_ordering(179) 00:20:56.628 fused_ordering(180) 00:20:56.628 fused_ordering(181) 00:20:56.628 fused_ordering(182) 00:20:56.628 fused_ordering(183) 00:20:56.628 fused_ordering(184) 00:20:56.628 fused_ordering(185) 00:20:56.628 fused_ordering(186) 00:20:56.628 fused_ordering(187) 00:20:56.628 fused_ordering(188) 00:20:56.628 fused_ordering(189) 00:20:56.628 fused_ordering(190) 00:20:56.628 fused_ordering(191) 00:20:56.628 fused_ordering(192) 00:20:56.628 fused_ordering(193) 00:20:56.628 fused_ordering(194) 00:20:56.628 fused_ordering(195) 00:20:56.628 fused_ordering(196) 00:20:56.628 fused_ordering(197) 00:20:56.628 fused_ordering(198) 00:20:56.628 fused_ordering(199) 00:20:56.628 fused_ordering(200) 00:20:56.628 fused_ordering(201) 00:20:56.628 fused_ordering(202) 00:20:56.628 fused_ordering(203) 00:20:56.628 fused_ordering(204) 00:20:56.628 fused_ordering(205) 00:20:56.885 fused_ordering(206) 00:20:56.885 fused_ordering(207) 00:20:56.885 fused_ordering(208) 00:20:56.885 fused_ordering(209) 00:20:56.885 fused_ordering(210) 00:20:56.885 fused_ordering(211) 00:20:56.885 fused_ordering(212) 00:20:56.886 fused_ordering(213) 00:20:56.886 fused_ordering(214) 00:20:56.886 fused_ordering(215) 00:20:56.886 fused_ordering(216) 00:20:56.886 fused_ordering(217) 00:20:56.886 fused_ordering(218) 00:20:56.886 fused_ordering(219) 00:20:56.886 fused_ordering(220) 00:20:56.886 fused_ordering(221) 00:20:56.886 fused_ordering(222) 00:20:56.886 fused_ordering(223) 00:20:56.886 fused_ordering(224) 00:20:56.886 fused_ordering(225) 00:20:56.886 fused_ordering(226) 00:20:56.886 fused_ordering(227) 00:20:56.886 fused_ordering(228) 00:20:56.886 fused_ordering(229) 00:20:56.886 fused_ordering(230) 00:20:56.886 fused_ordering(231) 00:20:56.886 fused_ordering(232) 00:20:56.886 fused_ordering(233) 00:20:56.886 fused_ordering(234) 00:20:56.886 fused_ordering(235) 00:20:56.886 fused_ordering(236) 00:20:56.886 fused_ordering(237) 00:20:56.886 fused_ordering(238) 00:20:56.886 fused_ordering(239) 00:20:56.886 fused_ordering(240) 00:20:56.886 fused_ordering(241) 00:20:56.886 fused_ordering(242) 00:20:56.886 fused_ordering(243) 00:20:56.886 fused_ordering(244) 00:20:56.886 fused_ordering(245) 00:20:56.886 fused_ordering(246) 00:20:56.886 fused_ordering(247) 00:20:56.886 fused_ordering(248) 00:20:56.886 fused_ordering(249) 00:20:56.886 fused_ordering(250) 00:20:56.886 fused_ordering(251) 00:20:56.886 fused_ordering(252) 00:20:56.886 fused_ordering(253) 00:20:56.886 fused_ordering(254) 00:20:56.886 fused_ordering(255) 00:20:56.886 fused_ordering(256) 00:20:56.886 fused_ordering(257) 00:20:56.886 fused_ordering(258) 00:20:56.886 fused_ordering(259) 00:20:56.886 fused_ordering(260) 00:20:56.886 fused_ordering(261) 00:20:56.886 fused_ordering(262) 00:20:56.886 fused_ordering(263) 00:20:56.886 fused_ordering(264) 00:20:56.886 fused_ordering(265) 00:20:56.886 fused_ordering(266) 00:20:56.886 fused_ordering(267) 00:20:56.886 fused_ordering(268) 00:20:56.886 fused_ordering(269) 00:20:56.886 fused_ordering(270) 00:20:56.886 fused_ordering(271) 00:20:56.886 fused_ordering(272) 00:20:56.886 fused_ordering(273) 00:20:56.886 fused_ordering(274) 00:20:56.886 fused_ordering(275) 00:20:56.886 fused_ordering(276) 00:20:56.886 fused_ordering(277) 00:20:56.886 fused_ordering(278) 00:20:56.886 fused_ordering(279) 00:20:56.886 fused_ordering(280) 00:20:56.886 fused_ordering(281) 00:20:56.886 fused_ordering(282) 00:20:56.886 fused_ordering(283) 00:20:56.886 fused_ordering(284) 00:20:56.886 
fused_ordering(285) 00:20:56.886 fused_ordering(286) 00:20:56.886 fused_ordering(287) 00:20:56.886 fused_ordering(288) 00:20:56.886 fused_ordering(289) 00:20:56.886 fused_ordering(290) 00:20:56.886 fused_ordering(291) 00:20:56.886 fused_ordering(292) 00:20:56.886 fused_ordering(293) 00:20:56.886 fused_ordering(294) 00:20:56.886 fused_ordering(295) 00:20:56.886 fused_ordering(296) 00:20:56.886 fused_ordering(297) 00:20:56.886 fused_ordering(298) 00:20:56.886 fused_ordering(299) 00:20:56.886 fused_ordering(300) 00:20:56.886 fused_ordering(301) 00:20:56.886 fused_ordering(302) 00:20:56.886 fused_ordering(303) 00:20:56.886 fused_ordering(304) 00:20:56.886 fused_ordering(305) 00:20:56.886 fused_ordering(306) 00:20:56.886 fused_ordering(307) 00:20:56.886 fused_ordering(308) 00:20:56.886 fused_ordering(309) 00:20:56.886 fused_ordering(310) 00:20:56.886 fused_ordering(311) 00:20:56.886 fused_ordering(312) 00:20:56.886 fused_ordering(313) 00:20:56.886 fused_ordering(314) 00:20:56.886 fused_ordering(315) 00:20:56.886 fused_ordering(316) 00:20:56.886 fused_ordering(317) 00:20:56.886 fused_ordering(318) 00:20:56.886 fused_ordering(319) 00:20:56.886 fused_ordering(320) 00:20:56.886 fused_ordering(321) 00:20:56.886 fused_ordering(322) 00:20:56.886 fused_ordering(323) 00:20:56.886 fused_ordering(324) 00:20:56.886 fused_ordering(325) 00:20:56.886 fused_ordering(326) 00:20:56.886 fused_ordering(327) 00:20:56.886 fused_ordering(328) 00:20:56.886 fused_ordering(329) 00:20:56.886 fused_ordering(330) 00:20:56.886 fused_ordering(331) 00:20:56.886 fused_ordering(332) 00:20:56.886 fused_ordering(333) 00:20:56.886 fused_ordering(334) 00:20:56.886 fused_ordering(335) 00:20:56.886 fused_ordering(336) 00:20:56.886 fused_ordering(337) 00:20:56.886 fused_ordering(338) 00:20:56.886 fused_ordering(339) 00:20:56.886 fused_ordering(340) 00:20:56.886 fused_ordering(341) 00:20:56.886 fused_ordering(342) 00:20:56.886 fused_ordering(343) 00:20:56.886 fused_ordering(344) 00:20:56.886 fused_ordering(345) 00:20:56.886 fused_ordering(346) 00:20:56.886 fused_ordering(347) 00:20:56.886 fused_ordering(348) 00:20:56.886 fused_ordering(349) 00:20:56.886 fused_ordering(350) 00:20:56.886 fused_ordering(351) 00:20:56.886 fused_ordering(352) 00:20:56.886 fused_ordering(353) 00:20:56.886 fused_ordering(354) 00:20:56.886 fused_ordering(355) 00:20:56.886 fused_ordering(356) 00:20:56.886 fused_ordering(357) 00:20:56.886 fused_ordering(358) 00:20:56.886 fused_ordering(359) 00:20:56.886 fused_ordering(360) 00:20:56.886 fused_ordering(361) 00:20:56.886 fused_ordering(362) 00:20:56.886 fused_ordering(363) 00:20:56.886 fused_ordering(364) 00:20:56.886 fused_ordering(365) 00:20:56.886 fused_ordering(366) 00:20:56.886 fused_ordering(367) 00:20:56.886 fused_ordering(368) 00:20:56.886 fused_ordering(369) 00:20:56.886 fused_ordering(370) 00:20:56.886 fused_ordering(371) 00:20:56.886 fused_ordering(372) 00:20:56.886 fused_ordering(373) 00:20:56.886 fused_ordering(374) 00:20:56.886 fused_ordering(375) 00:20:56.886 fused_ordering(376) 00:20:56.886 fused_ordering(377) 00:20:56.886 fused_ordering(378) 00:20:56.886 fused_ordering(379) 00:20:56.886 fused_ordering(380) 00:20:56.886 fused_ordering(381) 00:20:56.886 fused_ordering(382) 00:20:56.886 fused_ordering(383) 00:20:56.886 fused_ordering(384) 00:20:56.886 fused_ordering(385) 00:20:56.886 fused_ordering(386) 00:20:56.886 fused_ordering(387) 00:20:56.886 fused_ordering(388) 00:20:56.886 fused_ordering(389) 00:20:56.886 fused_ordering(390) 00:20:56.886 fused_ordering(391) 00:20:56.886 fused_ordering(392) 
00:20:56.886 fused_ordering(393) 00:20:56.886 fused_ordering(394) 00:20:56.886 fused_ordering(395) 00:20:56.886 fused_ordering(396) 00:20:56.886 fused_ordering(397) 00:20:56.886 fused_ordering(398) 00:20:56.886 fused_ordering(399) 00:20:56.886 fused_ordering(400) 00:20:56.886 fused_ordering(401) 00:20:56.886 fused_ordering(402) 00:20:56.886 fused_ordering(403) 00:20:56.886 fused_ordering(404) 00:20:56.886 fused_ordering(405) 00:20:56.886 fused_ordering(406) 00:20:56.886 fused_ordering(407) 00:20:56.886 fused_ordering(408) 00:20:56.886 fused_ordering(409) 00:20:56.886 fused_ordering(410) 00:20:57.144 fused_ordering(411) 00:20:57.144 fused_ordering(412) 00:20:57.144 fused_ordering(413) 00:20:57.144 fused_ordering(414) 00:20:57.144 fused_ordering(415) 00:20:57.144 fused_ordering(416) 00:20:57.144 fused_ordering(417) 00:20:57.144 fused_ordering(418) 00:20:57.144 fused_ordering(419) 00:20:57.144 fused_ordering(420) 00:20:57.144 fused_ordering(421) 00:20:57.144 fused_ordering(422) 00:20:57.144 fused_ordering(423) 00:20:57.144 fused_ordering(424) 00:20:57.144 fused_ordering(425) 00:20:57.144 fused_ordering(426) 00:20:57.144 fused_ordering(427) 00:20:57.144 fused_ordering(428) 00:20:57.144 fused_ordering(429) 00:20:57.144 fused_ordering(430) 00:20:57.144 fused_ordering(431) 00:20:57.144 fused_ordering(432) 00:20:57.144 fused_ordering(433) 00:20:57.144 fused_ordering(434) 00:20:57.144 fused_ordering(435) 00:20:57.144 fused_ordering(436) 00:20:57.144 fused_ordering(437) 00:20:57.144 fused_ordering(438) 00:20:57.144 fused_ordering(439) 00:20:57.144 fused_ordering(440) 00:20:57.144 fused_ordering(441) 00:20:57.144 fused_ordering(442) 00:20:57.144 fused_ordering(443) 00:20:57.144 fused_ordering(444) 00:20:57.144 fused_ordering(445) 00:20:57.144 fused_ordering(446) 00:20:57.144 fused_ordering(447) 00:20:57.144 fused_ordering(448) 00:20:57.144 fused_ordering(449) 00:20:57.144 fused_ordering(450) 00:20:57.144 fused_ordering(451) 00:20:57.144 fused_ordering(452) 00:20:57.144 fused_ordering(453) 00:20:57.144 fused_ordering(454) 00:20:57.144 fused_ordering(455) 00:20:57.145 fused_ordering(456) 00:20:57.145 fused_ordering(457) 00:20:57.145 fused_ordering(458) 00:20:57.145 fused_ordering(459) 00:20:57.145 fused_ordering(460) 00:20:57.145 fused_ordering(461) 00:20:57.145 fused_ordering(462) 00:20:57.145 fused_ordering(463) 00:20:57.145 fused_ordering(464) 00:20:57.145 fused_ordering(465) 00:20:57.145 fused_ordering(466) 00:20:57.145 fused_ordering(467) 00:20:57.145 fused_ordering(468) 00:20:57.145 fused_ordering(469) 00:20:57.145 fused_ordering(470) 00:20:57.145 fused_ordering(471) 00:20:57.145 fused_ordering(472) 00:20:57.145 fused_ordering(473) 00:20:57.145 fused_ordering(474) 00:20:57.145 fused_ordering(475) 00:20:57.145 fused_ordering(476) 00:20:57.145 fused_ordering(477) 00:20:57.145 fused_ordering(478) 00:20:57.145 fused_ordering(479) 00:20:57.145 fused_ordering(480) 00:20:57.145 fused_ordering(481) 00:20:57.145 fused_ordering(482) 00:20:57.145 fused_ordering(483) 00:20:57.145 fused_ordering(484) 00:20:57.145 fused_ordering(485) 00:20:57.145 fused_ordering(486) 00:20:57.145 fused_ordering(487) 00:20:57.145 fused_ordering(488) 00:20:57.145 fused_ordering(489) 00:20:57.145 fused_ordering(490) 00:20:57.145 fused_ordering(491) 00:20:57.145 fused_ordering(492) 00:20:57.145 fused_ordering(493) 00:20:57.145 fused_ordering(494) 00:20:57.145 fused_ordering(495) 00:20:57.145 fused_ordering(496) 00:20:57.145 fused_ordering(497) 00:20:57.145 fused_ordering(498) 00:20:57.145 fused_ordering(499) 00:20:57.145 
fused_ordering(500) 00:20:57.145 fused_ordering(501) 00:20:57.145 fused_ordering(502) 00:20:57.145 fused_ordering(503) 00:20:57.145 fused_ordering(504) 00:20:57.145 fused_ordering(505) 00:20:57.145 fused_ordering(506) 00:20:57.145 fused_ordering(507) 00:20:57.145 fused_ordering(508) 00:20:57.145 fused_ordering(509) 00:20:57.145 fused_ordering(510) 00:20:57.145 fused_ordering(511) 00:20:57.145 fused_ordering(512) 00:20:57.145 fused_ordering(513) 00:20:57.145 fused_ordering(514) 00:20:57.145 fused_ordering(515) 00:20:57.145 fused_ordering(516) 00:20:57.145 fused_ordering(517) 00:20:57.145 fused_ordering(518) 00:20:57.145 fused_ordering(519) 00:20:57.145 fused_ordering(520) 00:20:57.145 fused_ordering(521) 00:20:57.145 fused_ordering(522) 00:20:57.145 fused_ordering(523) 00:20:57.145 fused_ordering(524) 00:20:57.145 fused_ordering(525) 00:20:57.145 fused_ordering(526) 00:20:57.145 fused_ordering(527) 00:20:57.145 fused_ordering(528) 00:20:57.145 fused_ordering(529) 00:20:57.145 fused_ordering(530) 00:20:57.145 fused_ordering(531) 00:20:57.145 fused_ordering(532) 00:20:57.145 fused_ordering(533) 00:20:57.145 fused_ordering(534) 00:20:57.145 fused_ordering(535) 00:20:57.145 fused_ordering(536) 00:20:57.145 fused_ordering(537) 00:20:57.145 fused_ordering(538) 00:20:57.145 fused_ordering(539) 00:20:57.145 fused_ordering(540) 00:20:57.145 fused_ordering(541) 00:20:57.145 fused_ordering(542) 00:20:57.145 fused_ordering(543) 00:20:57.145 fused_ordering(544) 00:20:57.145 fused_ordering(545) 00:20:57.145 fused_ordering(546) 00:20:57.145 fused_ordering(547) 00:20:57.145 fused_ordering(548) 00:20:57.145 fused_ordering(549) 00:20:57.145 fused_ordering(550) 00:20:57.145 fused_ordering(551) 00:20:57.145 fused_ordering(552) 00:20:57.145 fused_ordering(553) 00:20:57.145 fused_ordering(554) 00:20:57.145 fused_ordering(555) 00:20:57.145 fused_ordering(556) 00:20:57.145 fused_ordering(557) 00:20:57.145 fused_ordering(558) 00:20:57.145 fused_ordering(559) 00:20:57.145 fused_ordering(560) 00:20:57.145 fused_ordering(561) 00:20:57.145 fused_ordering(562) 00:20:57.145 fused_ordering(563) 00:20:57.145 fused_ordering(564) 00:20:57.145 fused_ordering(565) 00:20:57.145 fused_ordering(566) 00:20:57.145 fused_ordering(567) 00:20:57.145 fused_ordering(568) 00:20:57.145 fused_ordering(569) 00:20:57.145 fused_ordering(570) 00:20:57.145 fused_ordering(571) 00:20:57.145 fused_ordering(572) 00:20:57.145 fused_ordering(573) 00:20:57.145 fused_ordering(574) 00:20:57.145 fused_ordering(575) 00:20:57.145 fused_ordering(576) 00:20:57.145 fused_ordering(577) 00:20:57.145 fused_ordering(578) 00:20:57.145 fused_ordering(579) 00:20:57.145 fused_ordering(580) 00:20:57.145 fused_ordering(581) 00:20:57.145 fused_ordering(582) 00:20:57.145 fused_ordering(583) 00:20:57.145 fused_ordering(584) 00:20:57.145 fused_ordering(585) 00:20:57.145 fused_ordering(586) 00:20:57.145 fused_ordering(587) 00:20:57.145 fused_ordering(588) 00:20:57.145 fused_ordering(589) 00:20:57.145 fused_ordering(590) 00:20:57.145 fused_ordering(591) 00:20:57.145 fused_ordering(592) 00:20:57.145 fused_ordering(593) 00:20:57.145 fused_ordering(594) 00:20:57.145 fused_ordering(595) 00:20:57.145 fused_ordering(596) 00:20:57.145 fused_ordering(597) 00:20:57.145 fused_ordering(598) 00:20:57.145 fused_ordering(599) 00:20:57.145 fused_ordering(600) 00:20:57.145 fused_ordering(601) 00:20:57.145 fused_ordering(602) 00:20:57.145 fused_ordering(603) 00:20:57.145 fused_ordering(604) 00:20:57.145 fused_ordering(605) 00:20:57.145 fused_ordering(606) 00:20:57.145 fused_ordering(607) 
00:20:57.145 fused_ordering(608) 00:20:57.145 fused_ordering(609) 00:20:57.145 fused_ordering(610) 00:20:57.145 fused_ordering(611) 00:20:57.145 fused_ordering(612) 00:20:57.145 fused_ordering(613) 00:20:57.145 fused_ordering(614) 00:20:57.145 fused_ordering(615) 00:20:57.769 fused_ordering(616) 00:20:57.769 fused_ordering(617) 00:20:57.769 fused_ordering(618) 00:20:57.769 fused_ordering(619) 00:20:57.769 fused_ordering(620) 00:20:57.769 fused_ordering(621) 00:20:57.769 fused_ordering(622) 00:20:57.769 fused_ordering(623) 00:20:57.769 fused_ordering(624) 00:20:57.769 fused_ordering(625) 00:20:57.769 fused_ordering(626) 00:20:57.769 fused_ordering(627) 00:20:57.769 fused_ordering(628) 00:20:57.769 fused_ordering(629) 00:20:57.769 fused_ordering(630) 00:20:57.769 fused_ordering(631) 00:20:57.769 fused_ordering(632) 00:20:57.769 fused_ordering(633) 00:20:57.769 fused_ordering(634) 00:20:57.769 fused_ordering(635) 00:20:57.769 fused_ordering(636) 00:20:57.769 fused_ordering(637) 00:20:57.769 fused_ordering(638) 00:20:57.769 fused_ordering(639) 00:20:57.769 fused_ordering(640) 00:20:57.769 fused_ordering(641) 00:20:57.769 fused_ordering(642) 00:20:57.769 fused_ordering(643) 00:20:57.769 fused_ordering(644) 00:20:57.769 fused_ordering(645) 00:20:57.769 fused_ordering(646) 00:20:57.769 fused_ordering(647) 00:20:57.769 fused_ordering(648) 00:20:57.769 fused_ordering(649) 00:20:57.769 fused_ordering(650) 00:20:57.769 fused_ordering(651) 00:20:57.769 fused_ordering(652) 00:20:57.769 fused_ordering(653) 00:20:57.769 fused_ordering(654) 00:20:57.769 fused_ordering(655) 00:20:57.769 fused_ordering(656) 00:20:57.769 fused_ordering(657) 00:20:57.769 fused_ordering(658) 00:20:57.769 fused_ordering(659) 00:20:57.769 fused_ordering(660) 00:20:57.769 fused_ordering(661) 00:20:57.769 fused_ordering(662) 00:20:57.769 fused_ordering(663) 00:20:57.769 fused_ordering(664) 00:20:57.769 fused_ordering(665) 00:20:57.769 fused_ordering(666) 00:20:57.769 fused_ordering(667) 00:20:57.769 fused_ordering(668) 00:20:57.769 fused_ordering(669) 00:20:57.769 fused_ordering(670) 00:20:57.769 fused_ordering(671) 00:20:57.769 fused_ordering(672) 00:20:57.769 fused_ordering(673) 00:20:57.769 fused_ordering(674) 00:20:57.769 fused_ordering(675) 00:20:57.769 fused_ordering(676) 00:20:57.769 fused_ordering(677) 00:20:57.769 fused_ordering(678) 00:20:57.769 fused_ordering(679) 00:20:57.769 fused_ordering(680) 00:20:57.769 fused_ordering(681) 00:20:57.769 fused_ordering(682) 00:20:57.769 fused_ordering(683) 00:20:57.769 fused_ordering(684) 00:20:57.769 fused_ordering(685) 00:20:57.769 fused_ordering(686) 00:20:57.769 fused_ordering(687) 00:20:57.769 fused_ordering(688) 00:20:57.769 fused_ordering(689) 00:20:57.769 fused_ordering(690) 00:20:57.769 fused_ordering(691) 00:20:57.769 fused_ordering(692) 00:20:57.769 fused_ordering(693) 00:20:57.769 fused_ordering(694) 00:20:57.769 fused_ordering(695) 00:20:57.769 fused_ordering(696) 00:20:57.769 fused_ordering(697) 00:20:57.769 fused_ordering(698) 00:20:57.769 fused_ordering(699) 00:20:57.769 fused_ordering(700) 00:20:57.769 fused_ordering(701) 00:20:57.769 fused_ordering(702) 00:20:57.769 fused_ordering(703) 00:20:57.769 fused_ordering(704) 00:20:57.769 fused_ordering(705) 00:20:57.769 fused_ordering(706) 00:20:57.769 fused_ordering(707) 00:20:57.769 fused_ordering(708) 00:20:57.769 fused_ordering(709) 00:20:57.769 fused_ordering(710) 00:20:57.769 fused_ordering(711) 00:20:57.769 fused_ordering(712) 00:20:57.769 fused_ordering(713) 00:20:57.769 fused_ordering(714) 00:20:57.769 
fused_ordering(715) 00:20:57.769 fused_ordering(716) 00:20:57.769 fused_ordering(717) 00:20:57.769 fused_ordering(718) 00:20:57.769 fused_ordering(719) 00:20:57.769 fused_ordering(720) 00:20:57.769 fused_ordering(721) 00:20:57.769 fused_ordering(722) 00:20:57.769 fused_ordering(723) 00:20:57.769 fused_ordering(724) 00:20:57.769 fused_ordering(725) 00:20:57.769 fused_ordering(726) 00:20:57.769 fused_ordering(727) 00:20:57.769 fused_ordering(728) 00:20:57.769 fused_ordering(729) 00:20:57.769 fused_ordering(730) 00:20:57.769 fused_ordering(731) 00:20:57.769 fused_ordering(732) 00:20:57.769 fused_ordering(733) 00:20:57.769 fused_ordering(734) 00:20:57.769 fused_ordering(735) 00:20:57.769 fused_ordering(736) 00:20:57.769 fused_ordering(737) 00:20:57.769 fused_ordering(738) 00:20:57.769 fused_ordering(739) 00:20:57.769 fused_ordering(740) 00:20:57.769 fused_ordering(741) 00:20:57.769 fused_ordering(742) 00:20:57.769 fused_ordering(743) 00:20:57.769 fused_ordering(744) 00:20:57.769 fused_ordering(745) 00:20:57.769 fused_ordering(746) 00:20:57.769 fused_ordering(747) 00:20:57.769 fused_ordering(748) 00:20:57.769 fused_ordering(749) 00:20:57.769 fused_ordering(750) 00:20:57.769 fused_ordering(751) 00:20:57.769 fused_ordering(752) 00:20:57.769 fused_ordering(753) 00:20:57.769 fused_ordering(754) 00:20:57.769 fused_ordering(755) 00:20:57.770 fused_ordering(756) 00:20:57.770 fused_ordering(757) 00:20:57.770 fused_ordering(758) 00:20:57.770 fused_ordering(759) 00:20:57.770 fused_ordering(760) 00:20:57.770 fused_ordering(761) 00:20:57.770 fused_ordering(762) 00:20:57.770 fused_ordering(763) 00:20:57.770 fused_ordering(764) 00:20:57.770 fused_ordering(765) 00:20:57.770 fused_ordering(766) 00:20:57.770 fused_ordering(767) 00:20:57.770 fused_ordering(768) 00:20:57.770 fused_ordering(769) 00:20:57.770 fused_ordering(770) 00:20:57.770 fused_ordering(771) 00:20:57.770 fused_ordering(772) 00:20:57.770 fused_ordering(773) 00:20:57.770 fused_ordering(774) 00:20:57.770 fused_ordering(775) 00:20:57.770 fused_ordering(776) 00:20:57.770 fused_ordering(777) 00:20:57.770 fused_ordering(778) 00:20:57.770 fused_ordering(779) 00:20:57.770 fused_ordering(780) 00:20:57.770 fused_ordering(781) 00:20:57.770 fused_ordering(782) 00:20:57.770 fused_ordering(783) 00:20:57.770 fused_ordering(784) 00:20:57.770 fused_ordering(785) 00:20:57.770 fused_ordering(786) 00:20:57.770 fused_ordering(787) 00:20:57.770 fused_ordering(788) 00:20:57.770 fused_ordering(789) 00:20:57.770 fused_ordering(790) 00:20:57.770 fused_ordering(791) 00:20:57.770 fused_ordering(792) 00:20:57.770 fused_ordering(793) 00:20:57.770 fused_ordering(794) 00:20:57.770 fused_ordering(795) 00:20:57.770 fused_ordering(796) 00:20:57.770 fused_ordering(797) 00:20:57.770 fused_ordering(798) 00:20:57.770 fused_ordering(799) 00:20:57.770 fused_ordering(800) 00:20:57.770 fused_ordering(801) 00:20:57.770 fused_ordering(802) 00:20:57.770 fused_ordering(803) 00:20:57.770 fused_ordering(804) 00:20:57.770 fused_ordering(805) 00:20:57.770 fused_ordering(806) 00:20:57.770 fused_ordering(807) 00:20:57.770 fused_ordering(808) 00:20:57.770 fused_ordering(809) 00:20:57.770 fused_ordering(810) 00:20:57.770 fused_ordering(811) 00:20:57.770 fused_ordering(812) 00:20:57.770 fused_ordering(813) 00:20:57.770 fused_ordering(814) 00:20:57.770 fused_ordering(815) 00:20:57.770 fused_ordering(816) 00:20:57.770 fused_ordering(817) 00:20:57.770 fused_ordering(818) 00:20:57.770 fused_ordering(819) 00:20:57.770 fused_ordering(820) 00:20:58.336 fused_ordering(821) 00:20:58.336 fused_ordering(822) 
00:20:58.336 fused_ordering(823) 00:20:58.336 fused_ordering(824) 00:20:58.336 fused_ordering(825) 00:20:58.336 fused_ordering(826) 00:20:58.336 fused_ordering(827) 00:20:58.337 fused_ordering(828) 00:20:58.337 fused_ordering(829) 00:20:58.337 fused_ordering(830) 00:20:58.337 fused_ordering(831) 00:20:58.337 fused_ordering(832) 00:20:58.337 fused_ordering(833) 00:20:58.337 fused_ordering(834) 00:20:58.337 fused_ordering(835) 00:20:58.337 fused_ordering(836) 00:20:58.337 fused_ordering(837) 00:20:58.337 fused_ordering(838) 00:20:58.337 fused_ordering(839) 00:20:58.337 fused_ordering(840) 00:20:58.337 fused_ordering(841) 00:20:58.337 fused_ordering(842) 00:20:58.337 fused_ordering(843) 00:20:58.337 fused_ordering(844) 00:20:58.337 fused_ordering(845) 00:20:58.337 fused_ordering(846) 00:20:58.337 fused_ordering(847) 00:20:58.337 fused_ordering(848) 00:20:58.337 fused_ordering(849) 00:20:58.337 fused_ordering(850) 00:20:58.337 fused_ordering(851) 00:20:58.337 fused_ordering(852) 00:20:58.337 fused_ordering(853) 00:20:58.337 fused_ordering(854) 00:20:58.337 fused_ordering(855) 00:20:58.337 fused_ordering(856) 00:20:58.337 fused_ordering(857) 00:20:58.337 fused_ordering(858) 00:20:58.337 fused_ordering(859) 00:20:58.337 fused_ordering(860) 00:20:58.337 fused_ordering(861) 00:20:58.337 fused_ordering(862) 00:20:58.337 fused_ordering(863) 00:20:58.337 fused_ordering(864) 00:20:58.337 fused_ordering(865) 00:20:58.337 fused_ordering(866) 00:20:58.337 fused_ordering(867) 00:20:58.337 fused_ordering(868) 00:20:58.337 fused_ordering(869) 00:20:58.337 fused_ordering(870) 00:20:58.337 fused_ordering(871) 00:20:58.337 fused_ordering(872) 00:20:58.337 fused_ordering(873) 00:20:58.337 fused_ordering(874) 00:20:58.337 fused_ordering(875) 00:20:58.337 fused_ordering(876) 00:20:58.337 fused_ordering(877) 00:20:58.337 fused_ordering(878) 00:20:58.337 fused_ordering(879) 00:20:58.337 fused_ordering(880) 00:20:58.337 fused_ordering(881) 00:20:58.337 fused_ordering(882) 00:20:58.337 fused_ordering(883) 00:20:58.337 fused_ordering(884) 00:20:58.337 fused_ordering(885) 00:20:58.337 fused_ordering(886) 00:20:58.337 fused_ordering(887) 00:20:58.337 fused_ordering(888) 00:20:58.337 fused_ordering(889) 00:20:58.337 fused_ordering(890) 00:20:58.337 fused_ordering(891) 00:20:58.337 fused_ordering(892) 00:20:58.337 fused_ordering(893) 00:20:58.337 fused_ordering(894) 00:20:58.337 fused_ordering(895) 00:20:58.337 fused_ordering(896) 00:20:58.337 fused_ordering(897) 00:20:58.337 fused_ordering(898) 00:20:58.337 fused_ordering(899) 00:20:58.337 fused_ordering(900) 00:20:58.337 fused_ordering(901) 00:20:58.337 fused_ordering(902) 00:20:58.337 fused_ordering(903) 00:20:58.337 fused_ordering(904) 00:20:58.337 fused_ordering(905) 00:20:58.337 fused_ordering(906) 00:20:58.337 fused_ordering(907) 00:20:58.337 fused_ordering(908) 00:20:58.337 fused_ordering(909) 00:20:58.337 fused_ordering(910) 00:20:58.337 fused_ordering(911) 00:20:58.337 fused_ordering(912) 00:20:58.337 fused_ordering(913) 00:20:58.337 fused_ordering(914) 00:20:58.337 fused_ordering(915) 00:20:58.337 fused_ordering(916) 00:20:58.337 fused_ordering(917) 00:20:58.337 fused_ordering(918) 00:20:58.337 fused_ordering(919) 00:20:58.337 fused_ordering(920) 00:20:58.337 fused_ordering(921) 00:20:58.337 fused_ordering(922) 00:20:58.337 fused_ordering(923) 00:20:58.337 fused_ordering(924) 00:20:58.337 fused_ordering(925) 00:20:58.337 fused_ordering(926) 00:20:58.337 fused_ordering(927) 00:20:58.337 fused_ordering(928) 00:20:58.337 fused_ordering(929) 00:20:58.337 
fused_ordering(930) 00:20:58.337 fused_ordering(931) 00:20:58.337 fused_ordering(932) 00:20:58.337 fused_ordering(933) 00:20:58.337 fused_ordering(934) 00:20:58.337 fused_ordering(935) 00:20:58.337 fused_ordering(936) 00:20:58.337 fused_ordering(937) 00:20:58.337 fused_ordering(938) 00:20:58.337 fused_ordering(939) 00:20:58.337 fused_ordering(940) 00:20:58.337 fused_ordering(941) 00:20:58.337 fused_ordering(942) 00:20:58.337 fused_ordering(943) 00:20:58.337 fused_ordering(944) 00:20:58.337 fused_ordering(945) 00:20:58.337 fused_ordering(946) 00:20:58.337 fused_ordering(947) 00:20:58.337 fused_ordering(948) 00:20:58.337 fused_ordering(949) 00:20:58.337 fused_ordering(950) 00:20:58.337 fused_ordering(951) 00:20:58.337 fused_ordering(952) 00:20:58.337 fused_ordering(953) 00:20:58.337 fused_ordering(954) 00:20:58.337 fused_ordering(955) 00:20:58.337 fused_ordering(956) 00:20:58.337 fused_ordering(957) 00:20:58.337 fused_ordering(958) 00:20:58.337 fused_ordering(959) 00:20:58.337 fused_ordering(960) 00:20:58.337 fused_ordering(961) 00:20:58.337 fused_ordering(962) 00:20:58.337 fused_ordering(963) 00:20:58.337 fused_ordering(964) 00:20:58.337 fused_ordering(965) 00:20:58.337 fused_ordering(966) 00:20:58.337 fused_ordering(967) 00:20:58.337 fused_ordering(968) 00:20:58.337 fused_ordering(969) 00:20:58.337 fused_ordering(970) 00:20:58.337 fused_ordering(971) 00:20:58.337 fused_ordering(972) 00:20:58.337 fused_ordering(973) 00:20:58.337 fused_ordering(974) 00:20:58.337 fused_ordering(975) 00:20:58.337 fused_ordering(976) 00:20:58.337 fused_ordering(977) 00:20:58.337 fused_ordering(978) 00:20:58.337 fused_ordering(979) 00:20:58.337 fused_ordering(980) 00:20:58.337 fused_ordering(981) 00:20:58.337 fused_ordering(982) 00:20:58.337 fused_ordering(983) 00:20:58.337 fused_ordering(984) 00:20:58.337 fused_ordering(985) 00:20:58.337 fused_ordering(986) 00:20:58.337 fused_ordering(987) 00:20:58.337 fused_ordering(988) 00:20:58.337 fused_ordering(989) 00:20:58.337 fused_ordering(990) 00:20:58.337 fused_ordering(991) 00:20:58.337 fused_ordering(992) 00:20:58.337 fused_ordering(993) 00:20:58.337 fused_ordering(994) 00:20:58.337 fused_ordering(995) 00:20:58.337 fused_ordering(996) 00:20:58.337 fused_ordering(997) 00:20:58.337 fused_ordering(998) 00:20:58.337 fused_ordering(999) 00:20:58.337 fused_ordering(1000) 00:20:58.337 fused_ordering(1001) 00:20:58.337 fused_ordering(1002) 00:20:58.337 fused_ordering(1003) 00:20:58.337 fused_ordering(1004) 00:20:58.337 fused_ordering(1005) 00:20:58.338 fused_ordering(1006) 00:20:58.338 fused_ordering(1007) 00:20:58.338 fused_ordering(1008) 00:20:58.338 fused_ordering(1009) 00:20:58.338 fused_ordering(1010) 00:20:58.338 fused_ordering(1011) 00:20:58.338 fused_ordering(1012) 00:20:58.338 fused_ordering(1013) 00:20:58.338 fused_ordering(1014) 00:20:58.338 fused_ordering(1015) 00:20:58.338 fused_ordering(1016) 00:20:58.338 fused_ordering(1017) 00:20:58.338 fused_ordering(1018) 00:20:58.338 fused_ordering(1019) 00:20:58.338 fused_ordering(1020) 00:20:58.338 fused_ordering(1021) 00:20:58.338 fused_ordering(1022) 00:20:58.338 fused_ordering(1023) 00:20:58.338 11:08:26 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:58.338 11:08:26 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:58.338 11:08:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:58.338 11:08:26 -- nvmf/common.sh@117 -- # sync 00:20:58.338 11:08:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:58.338 11:08:26 -- nvmf/common.sh@120 -- # set +e 00:20:58.338 11:08:26 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:20:58.338 11:08:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:58.338 rmmod nvme_tcp 00:20:58.338 rmmod nvme_fabrics 00:20:58.338 rmmod nvme_keyring 00:20:58.338 11:08:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:58.338 11:08:26 -- nvmf/common.sh@124 -- # set -e 00:20:58.338 11:08:26 -- nvmf/common.sh@125 -- # return 0 00:20:58.338 11:08:26 -- nvmf/common.sh@478 -- # '[' -n 86646 ']' 00:20:58.338 11:08:26 -- nvmf/common.sh@479 -- # killprocess 86646 00:20:58.338 11:08:26 -- common/autotest_common.sh@936 -- # '[' -z 86646 ']' 00:20:58.338 11:08:26 -- common/autotest_common.sh@940 -- # kill -0 86646 00:20:58.338 11:08:26 -- common/autotest_common.sh@941 -- # uname 00:20:58.338 11:08:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:58.338 11:08:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86646 00:20:58.338 11:08:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:58.338 11:08:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:58.338 killing process with pid 86646 00:20:58.338 11:08:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86646' 00:20:58.338 11:08:26 -- common/autotest_common.sh@955 -- # kill 86646 00:20:58.338 11:08:26 -- common/autotest_common.sh@960 -- # wait 86646 00:20:58.596 11:08:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:58.596 11:08:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:58.596 11:08:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:58.596 11:08:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.596 11:08:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:58.596 11:08:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.596 11:08:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.596 11:08:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.596 11:08:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:58.596 00:20:58.596 real 0m4.229s 00:20:58.596 user 0m5.124s 00:20:58.596 sys 0m1.423s 00:20:58.596 11:08:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:58.596 ************************************ 00:20:58.596 END TEST nvmf_fused_ordering 00:20:58.596 ************************************ 00:20:58.596 11:08:27 -- common/autotest_common.sh@10 -- # set +x 00:20:58.596 11:08:27 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:58.596 11:08:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:58.596 11:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:58.596 11:08:27 -- common/autotest_common.sh@10 -- # set +x 00:20:58.596 ************************************ 00:20:58.596 START TEST nvmf_delete_subsystem 00:20:58.597 ************************************ 00:20:58.597 11:08:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:58.855 * Looking for test storage... 
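The nvmf_delete_subsystem run that starts here follows a simple pattern: build a TCP target whose namespace sits on an artificially slow delay bdev, start spdk_nvme_perf against it, then delete the subsystem while that I/O is still outstanding. A condensed sketch of the flow, pieced together from the RPCs visible later in this trace (rpc_cmd is assumed to wrap scripts/rpc.py against the target's RPC socket, and the backgrounding/PID capture is an approximation, not the script's literal code):

# Sketch of the delete_subsystem flow exercised below (assumptions noted above)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                          # null backing bdev
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # slow it down so I/O stays queued
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 & perf_pid=$!        # keep 128 commands in flight
sleep 2
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1         # delete while I/O is outstanding

The large -r/-t/-w/-n values passed to bdev_delay_create are what guarantee commands are still queued when the delete lands.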
00:20:58.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:58.855 11:08:27 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.855 11:08:27 -- nvmf/common.sh@7 -- # uname -s 00:20:58.856 11:08:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.856 11:08:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.856 11:08:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.856 11:08:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.856 11:08:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.856 11:08:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.856 11:08:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.856 11:08:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.856 11:08:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.856 11:08:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.856 11:08:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:58.856 11:08:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:20:58.856 11:08:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.856 11:08:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.856 11:08:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.856 11:08:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.856 11:08:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.856 11:08:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.856 11:08:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.856 11:08:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.856 11:08:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.856 11:08:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.856 11:08:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.856 11:08:27 -- paths/export.sh@5 -- # export PATH 00:20:58.856 11:08:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.856 11:08:27 -- nvmf/common.sh@47 -- # : 0 00:20:58.856 11:08:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.856 11:08:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.856 11:08:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.856 11:08:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.856 11:08:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.856 11:08:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.856 11:08:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.856 11:08:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.856 11:08:27 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:58.856 11:08:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:58.856 11:08:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.856 11:08:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:58.856 11:08:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:58.856 11:08:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:58.856 11:08:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.856 11:08:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.856 11:08:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.856 11:08:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:58.856 11:08:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:58.856 11:08:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:58.856 11:08:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:58.856 11:08:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:58.856 11:08:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:58.856 11:08:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.856 11:08:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.856 11:08:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:58.856 11:08:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:58.856 11:08:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.856 11:08:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.856 11:08:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.856 11:08:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:58.856 11:08:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.856 11:08:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.856 11:08:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.856 11:08:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.856 11:08:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:58.856 11:08:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:58.856 Cannot find device "nvmf_tgt_br" 00:20:58.856 11:08:27 -- nvmf/common.sh@155 -- # true 00:20:58.856 11:08:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.856 Cannot find device "nvmf_tgt_br2" 00:20:58.856 11:08:27 -- nvmf/common.sh@156 -- # true 00:20:58.856 11:08:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:58.856 11:08:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:58.856 Cannot find device "nvmf_tgt_br" 00:20:58.856 11:08:27 -- nvmf/common.sh@158 -- # true 00:20:58.856 11:08:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:58.856 Cannot find device "nvmf_tgt_br2" 00:20:58.856 11:08:27 -- nvmf/common.sh@159 -- # true 00:20:58.856 11:08:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:58.856 11:08:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:58.856 11:08:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.856 11:08:27 -- nvmf/common.sh@162 -- # true 00:20:58.856 11:08:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.856 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.856 11:08:27 -- nvmf/common.sh@163 -- # true 00:20:58.856 11:08:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:58.856 11:08:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:58.856 11:08:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:58.856 11:08:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:58.856 11:08:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:58.856 11:08:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:59.114 11:08:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:59.114 11:08:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:59.114 11:08:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:59.114 11:08:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:59.114 11:08:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:59.114 11:08:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:59.114 11:08:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:59.114 11:08:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:59.114 11:08:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:59.114 11:08:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:59.114 11:08:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:59.114 11:08:27 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:59.114 11:08:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:59.114 11:08:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:59.114 11:08:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:59.115 11:08:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:59.115 11:08:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:59.115 11:08:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:59.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:20:59.115 00:20:59.115 --- 10.0.0.2 ping statistics --- 00:20:59.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.115 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:59.115 11:08:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:59.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:59.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:59.115 00:20:59.115 --- 10.0.0.3 ping statistics --- 00:20:59.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.115 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:59.115 11:08:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:59.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:59.115 00:20:59.115 --- 10.0.0.1 ping statistics --- 00:20:59.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.115 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:59.115 11:08:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.115 11:08:27 -- nvmf/common.sh@422 -- # return 0 00:20:59.115 11:08:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:59.115 11:08:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.115 11:08:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:59.115 11:08:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:59.115 11:08:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.115 11:08:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:59.115 11:08:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:59.115 11:08:27 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:59.115 11:08:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:59.115 11:08:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:59.115 11:08:27 -- common/autotest_common.sh@10 -- # set +x 00:20:59.115 11:08:27 -- nvmf/common.sh@470 -- # nvmfpid=86916 00:20:59.115 11:08:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:59.115 11:08:27 -- nvmf/common.sh@471 -- # waitforlisten 86916 00:20:59.115 11:08:27 -- common/autotest_common.sh@817 -- # '[' -z 86916 ']' 00:20:59.115 11:08:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.115 11:08:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:59.115 11:08:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
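The block above is nvmf_veth_init from nvmf/common.sh bringing up the test network: the target runs inside the nvmf_tgt_ns_spdk namespace, the initiator stays on the host at 10.0.0.1, and everything is joined through the nvmf_br bridge. A condensed, hand-runnable sketch of the same topology (interface names, addresses and flags taken from the trace; the helper's cleanup pass that produces the "Cannot find device" noise is omitted):

# Sketch of the veth/bridge topology built by nvmf_veth_init (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator pair, host side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # reach the target namespace from the host
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # target app, as launched below

With that in place, 10.0.0.2:4420 is where the subsystem created below will listen, and 10.0.0.1 is the host-side address the perf initiator connects from.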
00:20:59.115 11:08:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:59.115 11:08:27 -- common/autotest_common.sh@10 -- # set +x 00:20:59.115 [2024-04-18 11:08:27.711745] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:59.115 [2024-04-18 11:08:27.711841] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.373 [2024-04-18 11:08:27.851731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:59.373 [2024-04-18 11:08:27.951601] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.373 [2024-04-18 11:08:27.951662] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.373 [2024-04-18 11:08:27.951674] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.373 [2024-04-18 11:08:27.951682] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.373 [2024-04-18 11:08:27.951690] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.373 [2024-04-18 11:08:27.951860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.373 [2024-04-18 11:08:27.951870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.308 11:08:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:00.308 11:08:28 -- common/autotest_common.sh@850 -- # return 0 00:21:00.308 11:08:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:00.308 11:08:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:00.308 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.308 11:08:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.308 11:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.308 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.308 [2024-04-18 11:08:28.702543] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.308 11:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:00.308 11:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.308 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.308 11:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:00.308 11:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.308 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.308 [2024-04-18 11:08:28.722653] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.308 11:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:00.308 11:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.308 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.308 
NULL1 00:21:00.308 11:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:00.308 11:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.308 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.308 Delay0 00:21:00.308 11:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:00.308 11:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.308 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.308 11:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@28 -- # perf_pid=86967 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:21:00.308 11:08:28 -- target/delete_subsystem.sh@30 -- # sleep 2 00:21:00.308 [2024-04-18 11:08:28.943391] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:02.215 11:08:30 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.215 11:08:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.215 11:08:30 -- common/autotest_common.sh@10 -- # set +x 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting 
I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 starting I/O failed: -6 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 [2024-04-18 11:08:30.978652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c2b70 is same with the state(5) to be set 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed 
with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Read completed with error (sct=0, sc=8) 00:21:02.474 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 starting I/O failed: -6 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 [2024-04-18 11:08:30.980940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8fe4000c00 is same with the state(5) to be set 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write 
completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Write completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 Read completed with error (sct=0, sc=8) 00:21:02.475 [2024-04-18 11:08:30.982112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8fe400c250 is same with the state(5) to be set 00:21:03.408 [2024-04-18 11:08:31.957523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a9f20 is same with the state(5) to be set 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.408 Write completed with error (sct=0, sc=8) 00:21:03.408 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 [2024-04-18 11:08:31.977677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8fe400c510 is same with the state(5) to be set 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read 
completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 [2024-04-18 11:08:31.978272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8fe400bf90 is same with the state(5) to be set 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 [2024-04-18 11:08:31.979624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a46e0 is same with the state(5) to be set 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed 
with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Write completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 Read completed with error (sct=0, sc=8) 00:21:03.409 [2024-04-18 11:08:31.980104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4b30 is same with the state(5) to be set 00:21:03.409 [2024-04-18 11:08:31.980999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a9f20 (9): Bad file descriptor 00:21:03.409 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:03.409 11:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.409 11:08:31 -- target/delete_subsystem.sh@34 -- # delay=0 00:21:03.409 11:08:31 -- target/delete_subsystem.sh@35 -- # kill -0 86967 00:21:03.409 11:08:31 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:21:03.409 Initializing NVMe Controllers 00:21:03.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.409 Controller IO queue size 128, less than required. 00:21:03.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:03.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:21:03.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:21:03.409 Initialization complete. Launching workers. 
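The Read/Write "completed with error (sct=0, sc=8)" flood and the "starting I/O failed: -6" submissions above are the intended result: nvmf_delete_subsystem was issued while spdk_nvme_perf still had a full queue against Delay0, so every outstanding command comes back aborted and perf exits with "errors occurred". The script does not kill perf; it only polls for its exit, as the kill -0 lines in the trace show. A minimal sketch of that polling idiom, using the names from the trace (the real script's action when the counter overruns is not reproduced here):

# Poll until the perf process exits on its own after the subsystem is deleted
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do      # perf still running?
    (( delay++ > 30 )) && break                # bound the wait, as in the trace
    sleep 0.5
done

Afterwards the script asserts the opposite of success, NOT wait $perf_pid, confirming that perf really did exit with an error once its queues were torn down.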
00:21:03.409 ======================================================== 00:21:03.409 Latency(us) 00:21:03.409 Device Information : IOPS MiB/s Average min max 00:21:03.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.80 0.09 881355.48 463.21 1010609.97 00:21:03.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 143.03 0.07 1013492.69 1184.58 1998694.63 00:21:03.409 ======================================================== 00:21:03.409 Total : 319.83 0.16 940447.90 463.21 1998694.63 00:21:03.409 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@35 -- # kill -0 86967 00:21:03.977 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (86967) - No such process 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@45 -- # NOT wait 86967 00:21:03.977 11:08:32 -- common/autotest_common.sh@638 -- # local es=0 00:21:03.977 11:08:32 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 86967 00:21:03.977 11:08:32 -- common/autotest_common.sh@626 -- # local arg=wait 00:21:03.977 11:08:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:03.977 11:08:32 -- common/autotest_common.sh@630 -- # type -t wait 00:21:03.977 11:08:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:03.977 11:08:32 -- common/autotest_common.sh@641 -- # wait 86967 00:21:03.977 11:08:32 -- common/autotest_common.sh@641 -- # es=1 00:21:03.977 11:08:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:03.977 11:08:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:03.977 11:08:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:03.977 11:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.977 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 11:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.977 11:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.977 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 [2024-04-18 11:08:32.508946] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.977 11:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:03.977 11:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.977 11:08:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 11:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@54 -- # perf_pid=87014 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@56 -- # delay=0 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:03.977 11:08:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:04.235 [2024-04-18 11:08:32.685545] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:04.494 11:08:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:04.494 11:08:33 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:04.494 11:08:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:05.107 11:08:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:05.107 11:08:33 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:05.107 11:08:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:05.672 11:08:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:05.672 11:08:34 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:05.672 11:08:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:05.930 11:08:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:05.930 11:08:34 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:05.930 11:08:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:06.495 11:08:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:06.495 11:08:35 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:06.495 11:08:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:07.059 11:08:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:07.059 11:08:35 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:07.059 11:08:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:07.317 Initializing NVMe Controllers 00:21:07.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.317 Controller IO queue size 128, less than required. 00:21:07.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:07.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:21:07.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:21:07.317 Initialization complete. Launching workers. 
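For the second pass the script re-creates the subsystem over the RPC socket and points spdk_nvme_perf at it again; the individual calls are already in the trace above (issued through the rpc_cmd wrapper), collected here for readability with scripts/rpc.py invoked directly:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 3 s of 70/30 random read/write, queue depth 128, 512-byte I/O, on cores 2-3 (-c 0xC)
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

The roughly one-second average latencies in the summary that follows presumably reflect the artificial delay of the Delay0 bdev, which is what keeps I/O queued long enough for the delete to land mid-flight.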
00:21:07.317 ======================================================== 00:21:07.317 Latency(us) 00:21:07.317 Device Information : IOPS MiB/s Average min max 00:21:07.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003281.46 1000129.65 1010085.27 00:21:07.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006015.62 1000217.28 1041922.29 00:21:07.317 ======================================================== 00:21:07.317 Total : 256.00 0.12 1004648.54 1000129.65 1041922.29 00:21:07.317 00:21:07.575 11:08:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:07.575 11:08:36 -- target/delete_subsystem.sh@57 -- # kill -0 87014 00:21:07.575 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (87014) - No such process 00:21:07.575 11:08:36 -- target/delete_subsystem.sh@67 -- # wait 87014 00:21:07.575 11:08:36 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:07.575 11:08:36 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:21:07.575 11:08:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:07.575 11:08:36 -- nvmf/common.sh@117 -- # sync 00:21:07.575 11:08:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.575 11:08:36 -- nvmf/common.sh@120 -- # set +e 00:21:07.575 11:08:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.575 11:08:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.575 rmmod nvme_tcp 00:21:07.575 rmmod nvme_fabrics 00:21:07.575 rmmod nvme_keyring 00:21:07.575 11:08:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.575 11:08:36 -- nvmf/common.sh@124 -- # set -e 00:21:07.575 11:08:36 -- nvmf/common.sh@125 -- # return 0 00:21:07.575 11:08:36 -- nvmf/common.sh@478 -- # '[' -n 86916 ']' 00:21:07.575 11:08:36 -- nvmf/common.sh@479 -- # killprocess 86916 00:21:07.575 11:08:36 -- common/autotest_common.sh@936 -- # '[' -z 86916 ']' 00:21:07.575 11:08:36 -- common/autotest_common.sh@940 -- # kill -0 86916 00:21:07.575 11:08:36 -- common/autotest_common.sh@941 -- # uname 00:21:07.575 11:08:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:07.575 11:08:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86916 00:21:07.575 killing process with pid 86916 00:21:07.575 11:08:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:07.575 11:08:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:07.575 11:08:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86916' 00:21:07.575 11:08:36 -- common/autotest_common.sh@955 -- # kill 86916 00:21:07.575 11:08:36 -- common/autotest_common.sh@960 -- # wait 86916 00:21:07.833 11:08:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:07.833 11:08:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:07.833 11:08:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:07.833 11:08:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.833 11:08:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.833 11:08:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.833 11:08:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.833 11:08:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.833 11:08:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:07.833 00:21:07.833 real 0m9.209s 00:21:07.833 user 0m28.655s 00:21:07.833 sys 0m1.468s 00:21:07.833 11:08:36 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:07.833 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:21:07.833 ************************************ 00:21:07.833 END TEST nvmf_delete_subsystem 00:21:07.833 ************************************ 00:21:07.833 11:08:36 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:21:07.833 11:08:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:07.833 11:08:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:07.833 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:21:08.092 ************************************ 00:21:08.092 START TEST nvmf_ns_masking 00:21:08.092 ************************************ 00:21:08.092 11:08:36 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:21:08.092 * Looking for test storage... 00:21:08.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:08.092 11:08:36 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.092 11:08:36 -- nvmf/common.sh@7 -- # uname -s 00:21:08.092 11:08:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.092 11:08:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.092 11:08:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.092 11:08:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.092 11:08:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.092 11:08:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.092 11:08:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.092 11:08:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.092 11:08:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.092 11:08:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.092 11:08:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:08.092 11:08:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:08.092 11:08:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.092 11:08:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.092 11:08:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.092 11:08:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.092 11:08:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.092 11:08:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.092 11:08:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.092 11:08:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.092 11:08:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.092 11:08:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.092 11:08:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.092 11:08:36 -- paths/export.sh@5 -- # export PATH 00:21:08.092 11:08:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.092 11:08:36 -- nvmf/common.sh@47 -- # : 0 00:21:08.092 11:08:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.092 11:08:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.092 11:08:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.092 11:08:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.092 11:08:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.092 11:08:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.092 11:08:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.092 11:08:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.092 11:08:36 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.092 11:08:36 -- target/ns_masking.sh@11 -- # loops=5 00:21:08.092 11:08:36 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:21:08.092 11:08:36 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:21:08.092 11:08:36 -- target/ns_masking.sh@15 -- # uuidgen 00:21:08.092 11:08:36 -- target/ns_masking.sh@15 -- # HOSTID=587f3fe7-b5a7-4784-8b08-c4451beb925f 00:21:08.092 11:08:36 -- target/ns_masking.sh@44 -- # nvmftestinit 00:21:08.092 11:08:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:08.092 11:08:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.092 11:08:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:08.092 11:08:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:08.092 11:08:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:08.092 11:08:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.092 11:08:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.092 11:08:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:08.092 11:08:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:08.092 11:08:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:08.092 11:08:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:08.092 11:08:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:08.092 11:08:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:08.092 11:08:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:08.092 11:08:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.092 11:08:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.092 11:08:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:08.092 11:08:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:08.092 11:08:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.092 11:08:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.092 11:08:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.092 11:08:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.092 11:08:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.092 11:08:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.092 11:08:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.092 11:08:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.092 11:08:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:08.092 11:08:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:08.092 Cannot find device "nvmf_tgt_br" 00:21:08.092 11:08:36 -- nvmf/common.sh@155 -- # true 00:21:08.092 11:08:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.092 Cannot find device "nvmf_tgt_br2" 00:21:08.092 11:08:36 -- nvmf/common.sh@156 -- # true 00:21:08.092 11:08:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:08.092 11:08:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:08.092 Cannot find device "nvmf_tgt_br" 00:21:08.092 11:08:36 -- nvmf/common.sh@158 -- # true 00:21:08.092 11:08:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:08.092 Cannot find device "nvmf_tgt_br2" 00:21:08.092 11:08:36 -- nvmf/common.sh@159 -- # true 00:21:08.092 11:08:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:08.351 11:08:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:08.351 11:08:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.351 11:08:36 -- nvmf/common.sh@162 -- # true 00:21:08.351 11:08:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.351 11:08:36 -- nvmf/common.sh@163 -- # true 00:21:08.351 11:08:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.351 11:08:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.351 11:08:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.351 11:08:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.351 11:08:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.351 11:08:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
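The "Cannot find device" / "Cannot open network namespace" messages near the start of nvmf_veth_init above are harmless: the helper first tears down whatever a previous run may have left behind and only then rebuilds the topology, so on a clean host every delete simply fails and is ignored (note the "true" steps in the trace). A simplified equivalent of that cleanup pass, with 2>/dev/null || true standing in for the harness's xtrace-and-true handling:

  ip link delete nvmf_br type bridge                           2>/dev/null || true
  ip link delete nvmf_init_if                                  2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2   2>/dev/null || true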
00:21:08.351 11:08:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.351 11:08:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:08.351 11:08:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:08.351 11:08:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:08.351 11:08:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:08.351 11:08:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:08.351 11:08:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:08.351 11:08:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.351 11:08:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.351 11:08:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.351 11:08:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:08.351 11:08:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:08.351 11:08:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.351 11:08:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.351 11:08:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.351 11:08:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.351 11:08:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.351 11:08:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:08.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:21:08.351 00:21:08.351 --- 10.0.0.2 ping statistics --- 00:21:08.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.351 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:08.351 11:08:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:08.351 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:08.351 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:08.351 00:21:08.351 --- 10.0.0.3 ping statistics --- 00:21:08.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.351 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:08.351 11:08:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:08.351 00:21:08.351 --- 10.0.0.1 ping statistics --- 00:21:08.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.351 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:08.351 11:08:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.351 11:08:36 -- nvmf/common.sh@422 -- # return 0 00:21:08.351 11:08:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:08.351 11:08:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.351 11:08:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:08.351 11:08:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:08.351 11:08:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.351 11:08:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:08.351 11:08:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:08.351 11:08:36 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:21:08.351 11:08:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:08.351 11:08:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:08.351 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:21:08.351 11:08:36 -- nvmf/common.sh@470 -- # nvmfpid=87258 00:21:08.351 11:08:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:08.351 11:08:36 -- nvmf/common.sh@471 -- # waitforlisten 87258 00:21:08.351 11:08:36 -- common/autotest_common.sh@817 -- # '[' -z 87258 ']' 00:21:08.351 11:08:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.351 11:08:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:08.351 11:08:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.351 11:08:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:08.351 11:08:36 -- common/autotest_common.sh@10 -- # set +x 00:21:08.609 [2024-04-18 11:08:37.035800] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:08.609 [2024-04-18 11:08:37.035890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.609 [2024-04-18 11:08:37.177224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.884 [2024-04-18 11:08:37.281979] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.884 [2024-04-18 11:08:37.282055] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.884 [2024-04-18 11:08:37.282071] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.884 [2024-04-18 11:08:37.282082] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.884 [2024-04-18 11:08:37.282091] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
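At this point nvmftestinit has finished building the virtual topology and nvmf_tgt is coming up inside it: the target lives in the nvmf_tgt_ns_spdk network namespace and is reached from the initiator side over a veth pair attached to the nvmf_br bridge. A condensed sketch of the topology commands traced above (link-up steps and the second target interface, 10.0.0.3, omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge port
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end + bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # the pings above verify the data path before the target is started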
00:21:08.884 [2024-04-18 11:08:37.282307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.884 [2024-04-18 11:08:37.282400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.884 [2024-04-18 11:08:37.282879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.884 [2024-04-18 11:08:37.282831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.472 11:08:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:09.472 11:08:38 -- common/autotest_common.sh@850 -- # return 0 00:21:09.472 11:08:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:09.472 11:08:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:09.472 11:08:38 -- common/autotest_common.sh@10 -- # set +x 00:21:09.730 11:08:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.730 11:08:38 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:09.730 [2024-04-18 11:08:38.346949] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.987 11:08:38 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:21:09.987 11:08:38 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:21:09.987 11:08:38 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:10.246 Malloc1 00:21:10.246 11:08:38 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:10.504 Malloc2 00:21:10.504 11:08:38 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:10.762 11:08:39 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:10.762 11:08:39 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.020 [2024-04-18 11:08:39.640355] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.278 11:08:39 -- target/ns_masking.sh@61 -- # connect 00:21:11.278 11:08:39 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 587f3fe7-b5a7-4784-8b08-c4451beb925f -a 10.0.0.2 -s 4420 -i 4 00:21:11.278 11:08:39 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:21:11.278 11:08:39 -- common/autotest_common.sh@1184 -- # local i=0 00:21:11.278 11:08:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:11.278 11:08:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:11.278 11:08:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:13.177 11:08:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:13.177 11:08:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:13.177 11:08:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:13.177 11:08:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:13.177 11:08:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.177 11:08:41 -- common/autotest_common.sh@1194 -- # return 0 00:21:13.177 11:08:41 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:13.177 11:08:41 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:13.435 11:08:41 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:13.435 11:08:41 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:13.435 11:08:41 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:21:13.435 11:08:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:13.435 11:08:41 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:13.435 [ 0]:0x1 00:21:13.435 11:08:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:13.435 11:08:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:13.435 11:08:41 -- target/ns_masking.sh@40 -- # nguid=15e7017c73914d53a8ea426fa814ee5d 00:21:13.435 11:08:41 -- target/ns_masking.sh@41 -- # [[ 15e7017c73914d53a8ea426fa814ee5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.435 11:08:41 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:13.693 11:08:42 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:21:13.693 11:08:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:13.693 11:08:42 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:13.693 [ 0]:0x1 00:21:13.693 11:08:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:13.693 11:08:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:13.693 11:08:42 -- target/ns_masking.sh@40 -- # nguid=15e7017c73914d53a8ea426fa814ee5d 00:21:13.693 11:08:42 -- target/ns_masking.sh@41 -- # [[ 15e7017c73914d53a8ea426fa814ee5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.693 11:08:42 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:21:13.693 11:08:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:13.693 11:08:42 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:13.693 [ 1]:0x2 00:21:13.693 11:08:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:13.693 11:08:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:13.953 11:08:42 -- target/ns_masking.sh@40 -- # nguid=8c64e59110bb4bbdbaa07e9e8a02bcee 00:21:13.953 11:08:42 -- target/ns_masking.sh@41 -- # [[ 8c64e59110bb4bbdbaa07e9e8a02bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:13.953 11:08:42 -- target/ns_masking.sh@69 -- # disconnect 00:21:13.953 11:08:42 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:13.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:13.953 11:08:42 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:14.212 11:08:42 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:14.470 11:08:42 -- target/ns_masking.sh@77 -- # connect 1 00:21:14.470 11:08:42 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 587f3fe7-b5a7-4784-8b08-c4451beb925f -a 10.0.0.2 -s 4420 -i 4 00:21:14.470 11:08:42 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:14.470 11:08:42 -- common/autotest_common.sh@1184 -- # local i=0 00:21:14.470 11:08:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:14.470 11:08:42 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:21:14.470 11:08:42 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:21:14.470 11:08:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:16.368 11:08:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:16.368 11:08:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:16.368 11:08:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:16.368 11:08:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:16.368 11:08:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:16.368 11:08:44 -- common/autotest_common.sh@1194 -- # return 0 00:21:16.368 11:08:44 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:16.368 11:08:44 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:16.626 11:08:45 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:16.626 11:08:45 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:16.626 11:08:45 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:21:16.626 11:08:45 -- common/autotest_common.sh@638 -- # local es=0 00:21:16.626 11:08:45 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:21:16.626 11:08:45 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:16.626 11:08:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:16.626 11:08:45 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:16.626 11:08:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:16.626 11:08:45 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:16.626 11:08:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:16.626 11:08:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:16.626 11:08:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:16.626 11:08:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:16.626 11:08:45 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:16.626 11:08:45 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:16.626 11:08:45 -- common/autotest_common.sh@641 -- # es=1 00:21:16.626 11:08:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:16.626 11:08:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:16.626 11:08:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:16.626 11:08:45 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:21:16.626 11:08:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:16.626 11:08:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:16.626 [ 0]:0x2 00:21:16.626 11:08:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:16.626 11:08:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:16.626 11:08:45 -- target/ns_masking.sh@40 -- # nguid=8c64e59110bb4bbdbaa07e9e8a02bcee 00:21:16.626 11:08:45 -- target/ns_masking.sh@41 -- # [[ 8c64e59110bb4bbdbaa07e9e8a02bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:16.626 11:08:45 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:16.884 11:08:45 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:21:16.884 11:08:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:16.884 11:08:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:16.884 [ 0]:0x1 00:21:16.884 
11:08:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:16.884 11:08:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:17.141 11:08:45 -- target/ns_masking.sh@40 -- # nguid=15e7017c73914d53a8ea426fa814ee5d 00:21:17.141 11:08:45 -- target/ns_masking.sh@41 -- # [[ 15e7017c73914d53a8ea426fa814ee5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.141 11:08:45 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:21:17.141 11:08:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.141 11:08:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:17.141 [ 1]:0x2 00:21:17.141 11:08:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:17.141 11:08:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.141 11:08:45 -- target/ns_masking.sh@40 -- # nguid=8c64e59110bb4bbdbaa07e9e8a02bcee 00:21:17.141 11:08:45 -- target/ns_masking.sh@41 -- # [[ 8c64e59110bb4bbdbaa07e9e8a02bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.141 11:08:45 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:17.399 11:08:45 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:21:17.399 11:08:45 -- common/autotest_common.sh@638 -- # local es=0 00:21:17.399 11:08:45 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:21:17.399 11:08:45 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:17.399 11:08:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.399 11:08:45 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:17.399 11:08:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.399 11:08:45 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:17.399 11:08:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.399 11:08:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:17.399 11:08:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:17.399 11:08:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.399 11:08:45 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:17.399 11:08:45 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.399 11:08:45 -- common/autotest_common.sh@641 -- # es=1 00:21:17.399 11:08:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:17.399 11:08:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:17.399 11:08:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:17.399 11:08:45 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:21:17.399 11:08:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.399 11:08:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:17.399 [ 0]:0x2 00:21:17.399 11:08:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.399 11:08:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:17.399 11:08:45 -- target/ns_masking.sh@40 -- # nguid=8c64e59110bb4bbdbaa07e9e8a02bcee 00:21:17.399 11:08:45 -- target/ns_masking.sh@41 -- # [[ 8c64e59110bb4bbdbaa07e9e8a02bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.399 11:08:45 -- target/ns_masking.sh@91 -- # disconnect 00:21:17.399 11:08:45 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:17.399 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:17.399 11:08:46 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:17.657 11:08:46 -- target/ns_masking.sh@95 -- # connect 2 00:21:17.657 11:08:46 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 587f3fe7-b5a7-4784-8b08-c4451beb925f -a 10.0.0.2 -s 4420 -i 4 00:21:17.915 11:08:46 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:17.915 11:08:46 -- common/autotest_common.sh@1184 -- # local i=0 00:21:17.915 11:08:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:17.915 11:08:46 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:21:17.915 11:08:46 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:21:17.915 11:08:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:19.814 11:08:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:19.814 11:08:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:19.814 11:08:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:19.814 11:08:48 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:21:19.814 11:08:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.814 11:08:48 -- common/autotest_common.sh@1194 -- # return 0 00:21:19.814 11:08:48 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:19.814 11:08:48 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:19.814 11:08:48 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:19.814 11:08:48 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:19.814 11:08:48 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:21:19.814 11:08:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:19.814 11:08:48 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:19.814 [ 0]:0x1 00:21:20.071 11:08:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:20.071 11:08:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:20.071 11:08:48 -- target/ns_masking.sh@40 -- # nguid=15e7017c73914d53a8ea426fa814ee5d 00:21:20.071 11:08:48 -- target/ns_masking.sh@41 -- # [[ 15e7017c73914d53a8ea426fa814ee5d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:20.071 11:08:48 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:21:20.071 11:08:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:20.071 11:08:48 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:20.071 [ 1]:0x2 00:21:20.071 11:08:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:20.071 11:08:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:20.071 11:08:48 -- target/ns_masking.sh@40 -- # nguid=8c64e59110bb4bbdbaa07e9e8a02bcee 00:21:20.071 11:08:48 -- target/ns_masking.sh@41 -- # [[ 8c64e59110bb4bbdbaa07e9e8a02bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:20.071 11:08:48 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:20.329 11:08:48 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:21:20.329 11:08:48 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.329 11:08:48 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
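The masking exercised above is driven entirely from the target side: Malloc1 is re-added with --no-auto-visible, and nvmf_ns_add_host / nvmf_ns_remove_host grant or revoke it for nqn.2016-06.io.spdk:host1, while the connected host only ever sees the namespace's NGUID flip between a real value and all zeroes. A condensed sketch of the pattern, with rpc.py standing for the full scripts/rpc.py path in the trace:

  # target side: masked namespace, then explicit per-host visibility
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask for host1
  rpc.py nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again

  # host side, as in the ns_is_visible() checks traced above
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # all zeroes means the namespace is hidden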
00:21:20.329 11:08:48 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:20.329 11:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.329 11:08:48 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:20.329 11:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.329 11:08:48 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:20.329 11:08:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:20.329 11:08:48 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:20.329 11:08:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:20.329 11:08:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:20.329 11:08:48 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:20.329 11:08:48 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:20.329 11:08:48 -- common/autotest_common.sh@641 -- # es=1 00:21:20.329 11:08:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.329 11:08:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.329 11:08:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.329 11:08:48 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:21:20.329 11:08:48 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:20.329 11:08:48 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:20.329 [ 0]:0x2 00:21:20.329 11:08:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:20.329 11:08:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:20.587 11:08:48 -- target/ns_masking.sh@40 -- # nguid=8c64e59110bb4bbdbaa07e9e8a02bcee 00:21:20.587 11:08:48 -- target/ns_masking.sh@41 -- # [[ 8c64e59110bb4bbdbaa07e9e8a02bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:20.587 11:08:48 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:20.587 11:08:48 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.587 11:08:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:20.587 11:08:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:20.587 11:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.587 11:08:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:20.587 11:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.587 11:08:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:20.587 11:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.587 11:08:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:20.587 11:08:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:20.587 11:08:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:20.846 [2024-04-18 11:08:49.246478] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:20.846 2024/04/18 11:08:49 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:21:20.846 request: 00:21:20.846 { 00:21:20.846 "method": "nvmf_ns_remove_host", 00:21:20.846 "params": { 00:21:20.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.846 "nsid": 2, 00:21:20.846 "host": "nqn.2016-06.io.spdk:host1" 00:21:20.846 } 00:21:20.846 } 00:21:20.846 Got JSON-RPC error response 00:21:20.846 GoRPCClient: error on JSON-RPC call 00:21:20.846 11:08:49 -- common/autotest_common.sh@641 -- # es=1 00:21:20.846 11:08:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.846 11:08:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.846 11:08:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.846 11:08:49 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:21:20.846 11:08:49 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.846 11:08:49 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:21:20.846 11:08:49 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:20.846 11:08:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.846 11:08:49 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:20.846 11:08:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.846 11:08:49 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:20.846 11:08:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:20.846 11:08:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:20.846 11:08:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:20.846 11:08:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:20.846 11:08:49 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:20.846 11:08:49 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:20.846 11:08:49 -- common/autotest_common.sh@641 -- # es=1 00:21:20.846 11:08:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.846 11:08:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.846 11:08:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.846 11:08:49 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:21:20.846 11:08:49 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:20.846 11:08:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:20.846 [ 0]:0x2 00:21:20.846 11:08:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:20.846 11:08:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:20.846 11:08:49 -- target/ns_masking.sh@40 -- # nguid=8c64e59110bb4bbdbaa07e9e8a02bcee 00:21:20.846 11:08:49 -- target/ns_masking.sh@41 -- # [[ 8c64e59110bb4bbdbaa07e9e8a02bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:20.846 11:08:49 -- target/ns_masking.sh@108 -- # disconnect 00:21:20.846 11:08:49 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:20.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:20.846 11:08:49 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.106 11:08:49 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:21.106 11:08:49 -- target/ns_masking.sh@114 -- # nvmftestfini 
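The Code=-32602 (Invalid parameters) error above is the expected result: namespace 2 (Malloc2) was added without --no-auto-visible, which is presumably why the target refuses to edit its per-host visibility, and the script runs the RPC under the NOT helper so the test only passes if the call fails. Outside the harness the same expectation could be written as a plain expected-failure check (a simplified stand-in for the NOT/valid_exec_arg machinery, not the helper itself):

  if rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
      echo "removing a host from an auto-visible namespace unexpectedly succeeded" >&2
      exit 1
  fi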
00:21:21.106 11:08:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:21.106 11:08:49 -- nvmf/common.sh@117 -- # sync 00:21:21.364 11:08:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.364 11:08:49 -- nvmf/common.sh@120 -- # set +e 00:21:21.364 11:08:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.364 11:08:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.364 rmmod nvme_tcp 00:21:21.364 rmmod nvme_fabrics 00:21:21.364 rmmod nvme_keyring 00:21:21.364 11:08:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.364 11:08:49 -- nvmf/common.sh@124 -- # set -e 00:21:21.364 11:08:49 -- nvmf/common.sh@125 -- # return 0 00:21:21.364 11:08:49 -- nvmf/common.sh@478 -- # '[' -n 87258 ']' 00:21:21.364 11:08:49 -- nvmf/common.sh@479 -- # killprocess 87258 00:21:21.364 11:08:49 -- common/autotest_common.sh@936 -- # '[' -z 87258 ']' 00:21:21.364 11:08:49 -- common/autotest_common.sh@940 -- # kill -0 87258 00:21:21.364 11:08:49 -- common/autotest_common.sh@941 -- # uname 00:21:21.364 11:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.364 11:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87258 00:21:21.364 killing process with pid 87258 00:21:21.364 11:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:21.364 11:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:21.364 11:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87258' 00:21:21.364 11:08:49 -- common/autotest_common.sh@955 -- # kill 87258 00:21:21.364 11:08:49 -- common/autotest_common.sh@960 -- # wait 87258 00:21:21.622 11:08:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:21.622 11:08:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:21.622 11:08:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:21.622 11:08:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:21.622 11:08:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:21.622 11:08:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.622 11:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.622 11:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.622 11:08:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:21.622 00:21:21.622 real 0m13.716s 00:21:21.622 user 0m54.887s 00:21:21.622 sys 0m2.384s 00:21:21.622 11:08:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:21.622 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:21:21.622 ************************************ 00:21:21.622 END TEST nvmf_ns_masking 00:21:21.622 ************************************ 00:21:21.880 11:08:50 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:21:21.880 11:08:50 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:21:21.880 11:08:50 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:21.880 11:08:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:21.880 11:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:21.880 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:21:21.880 ************************************ 00:21:21.880 START TEST nvmf_host_management 00:21:21.880 ************************************ 00:21:21.880 11:08:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:21.880 * Looking for test storage... 
00:21:21.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:21.880 11:08:50 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:21.880 11:08:50 -- nvmf/common.sh@7 -- # uname -s 00:21:21.880 11:08:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.880 11:08:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.880 11:08:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.880 11:08:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.880 11:08:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.880 11:08:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.880 11:08:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.880 11:08:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.880 11:08:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.880 11:08:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.880 11:08:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:21.880 11:08:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:21.880 11:08:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.880 11:08:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.880 11:08:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:21.880 11:08:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.880 11:08:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:21.880 11:08:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.880 11:08:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.880 11:08:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.880 11:08:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.880 11:08:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.880 11:08:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.880 11:08:50 -- paths/export.sh@5 -- # export PATH 00:21:21.880 11:08:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.880 11:08:50 -- nvmf/common.sh@47 -- # : 0 00:21:21.880 11:08:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.880 11:08:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.880 11:08:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.880 11:08:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.880 11:08:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.880 11:08:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.880 11:08:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.880 11:08:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.880 11:08:50 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.880 11:08:50 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.880 11:08:50 -- target/host_management.sh@105 -- # nvmftestinit 00:21:21.880 11:08:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:21.880 11:08:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.880 11:08:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:21.880 11:08:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:21.880 11:08:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:21.880 11:08:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.880 11:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.880 11:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.880 11:08:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:21.880 11:08:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:21.880 11:08:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:21.880 11:08:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:21.880 11:08:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:21.880 11:08:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:21.880 11:08:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.881 11:08:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.881 11:08:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:21.881 11:08:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:21.881 11:08:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:21.881 11:08:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:21.881 11:08:50 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:21.881 11:08:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.881 11:08:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:21.881 11:08:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:21.881 11:08:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:21.881 11:08:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:21.881 11:08:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:21.881 11:08:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:21.881 Cannot find device "nvmf_tgt_br" 00:21:21.881 11:08:50 -- nvmf/common.sh@155 -- # true 00:21:21.881 11:08:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.881 Cannot find device "nvmf_tgt_br2" 00:21:21.881 11:08:50 -- nvmf/common.sh@156 -- # true 00:21:21.881 11:08:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:22.138 11:08:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:22.138 Cannot find device "nvmf_tgt_br" 00:21:22.138 11:08:50 -- nvmf/common.sh@158 -- # true 00:21:22.138 11:08:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:22.138 Cannot find device "nvmf_tgt_br2" 00:21:22.138 11:08:50 -- nvmf/common.sh@159 -- # true 00:21:22.138 11:08:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:22.138 11:08:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:22.138 11:08:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.138 11:08:50 -- nvmf/common.sh@162 -- # true 00:21:22.138 11:08:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.138 11:08:50 -- nvmf/common.sh@163 -- # true 00:21:22.138 11:08:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:22.138 11:08:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:22.138 11:08:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:22.138 11:08:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:22.138 11:08:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:22.138 11:08:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:22.138 11:08:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:22.138 11:08:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:22.138 11:08:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:22.138 11:08:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:22.138 11:08:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:22.138 11:08:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:22.138 11:08:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:22.138 11:08:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:22.138 11:08:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:22.138 11:08:50 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:21:22.138 11:08:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:22.138 11:08:50 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:22.138 11:08:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:22.138 11:08:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:22.138 11:08:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:22.396 11:08:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:22.396 11:08:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:22.396 11:08:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:22.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:21:22.396 00:21:22.396 --- 10.0.0.2 ping statistics --- 00:21:22.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.396 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:22.396 11:08:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:22.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:22.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:21:22.396 00:21:22.396 --- 10.0.0.3 ping statistics --- 00:21:22.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.396 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:22.396 11:08:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:22.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:22.396 00:21:22.396 --- 10.0.0.1 ping statistics --- 00:21:22.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.396 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:22.396 11:08:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.396 11:08:50 -- nvmf/common.sh@422 -- # return 0 00:21:22.396 11:08:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:22.396 11:08:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.396 11:08:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:22.396 11:08:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:22.396 11:08:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.396 11:08:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:22.396 11:08:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:22.396 11:08:50 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:21:22.396 11:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:22.396 11:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.396 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:21:22.396 ************************************ 00:21:22.396 START TEST nvmf_host_management 00:21:22.396 ************************************ 00:21:22.396 11:08:50 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:21:22.396 11:08:50 -- target/host_management.sh@69 -- # starttarget 00:21:22.396 11:08:50 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:21:22.396 11:08:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:22.396 11:08:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:22.396 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:21:22.396 11:08:50 -- nvmf/common.sh@470 -- # nvmfpid=87831 00:21:22.396 
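The nvmf_veth_init trace above (nvmf/common.sh@141-207) is easier to follow collapsed to its net effect. A condensed sketch, with every interface name, address, and the TCP port taken from the trace; the pre-cleanup steps (whose "Cannot find device" errors are expected on a fresh host) and the individual "ip link set ... up" calls are omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above simply prove that both target addresses are reachable from the default namespace and that 10.0.0.1 is reachable from inside nvmf_tgt_ns_spdk before modprobe nvme-tcp loads the host-side driver.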
11:08:50 -- nvmf/common.sh@471 -- # waitforlisten 87831 00:21:22.396 11:08:50 -- common/autotest_common.sh@817 -- # '[' -z 87831 ']' 00:21:22.396 11:08:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.396 11:08:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:22.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.396 11:08:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.396 11:08:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:22.396 11:08:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:22.396 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:21:22.396 [2024-04-18 11:08:50.977532] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:22.396 [2024-04-18 11:08:50.977665] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.653 [2024-04-18 11:08:51.126790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.653 [2024-04-18 11:08:51.220616] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.653 [2024-04-18 11:08:51.220671] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.653 [2024-04-18 11:08:51.220684] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.653 [2024-04-18 11:08:51.220693] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.653 [2024-04-18 11:08:51.220700] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
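nvmfappstart (nvmf/common.sh@469-471 above) launches the target inside that namespace and then blocks until its RPC socket answers. A minimal sketch of the same flow; the polling loop is only an illustrative stand-in for the real waitforlisten helper in autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll the default RPC socket until the app responds.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

The core mask 0x1E selects cores 1-4, which matches the "Total cores available: 4" notice above and the four reactor start-up lines that follow.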
00:21:22.653 [2024-04-18 11:08:51.221112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.653 [2024-04-18 11:08:51.221382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.653 [2024-04-18 11:08:51.221477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:22.653 [2024-04-18 11:08:51.221478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.590 11:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.590 11:08:52 -- common/autotest_common.sh@850 -- # return 0 00:21:23.590 11:08:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:23.590 11:08:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.590 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 11:08:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.590 11:08:52 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.590 11:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.590 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 [2024-04-18 11:08:52.064337] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.590 11:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.590 11:08:52 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:21:23.590 11:08:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:23.590 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 11:08:52 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:21:23.590 11:08:52 -- target/host_management.sh@23 -- # cat 00:21:23.590 11:08:52 -- target/host_management.sh@30 -- # rpc_cmd 00:21:23.590 11:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.590 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 Malloc0 00:21:23.590 [2024-04-18 11:08:52.140794] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.590 11:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.590 11:08:52 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:21:23.590 11:08:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.590 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 11:08:52 -- target/host_management.sh@73 -- # perfpid=87903 00:21:23.590 11:08:52 -- target/host_management.sh@74 -- # waitforlisten 87903 /var/tmp/bdevperf.sock 00:21:23.590 11:08:52 -- common/autotest_common.sh@817 -- # '[' -z 87903 ']' 00:21:23.590 11:08:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.590 11:08:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:23.590 11:08:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.590 11:08:52 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:21:23.590 11:08:52 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:23.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
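The rpcs.txt batch piped into rpc_cmd at host_management.sh@22-30 above is never echoed, but the Malloc0 bdev and the "Listening on 10.0.0.2 port 4420" notice it produces, the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier, and the cnode0/host0 names used later pin down its shape. An illustrative reconstruction only; the exact flags and ordering are assumptions, not the verbatim file:

    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The transport itself is created separately, just above, via rpc_cmd nvmf_create_transport -t tcp -o -u 8192.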
00:21:23.590 11:08:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:23.590 11:08:52 -- nvmf/common.sh@521 -- # config=() 00:21:23.590 11:08:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 11:08:52 -- nvmf/common.sh@521 -- # local subsystem config 00:21:23.590 11:08:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:23.590 11:08:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:23.590 { 00:21:23.590 "params": { 00:21:23.590 "name": "Nvme$subsystem", 00:21:23.590 "trtype": "$TEST_TRANSPORT", 00:21:23.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.590 "adrfam": "ipv4", 00:21:23.590 "trsvcid": "$NVMF_PORT", 00:21:23.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.590 "hdgst": ${hdgst:-false}, 00:21:23.590 "ddgst": ${ddgst:-false} 00:21:23.590 }, 00:21:23.590 "method": "bdev_nvme_attach_controller" 00:21:23.590 } 00:21:23.590 EOF 00:21:23.590 )") 00:21:23.590 11:08:52 -- nvmf/common.sh@543 -- # cat 00:21:23.590 11:08:52 -- nvmf/common.sh@545 -- # jq . 00:21:23.590 11:08:52 -- nvmf/common.sh@546 -- # IFS=, 00:21:23.590 11:08:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:23.590 "params": { 00:21:23.590 "name": "Nvme0", 00:21:23.590 "trtype": "tcp", 00:21:23.590 "traddr": "10.0.0.2", 00:21:23.590 "adrfam": "ipv4", 00:21:23.590 "trsvcid": "4420", 00:21:23.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.590 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:23.590 "hdgst": false, 00:21:23.590 "ddgst": false 00:21:23.590 }, 00:21:23.590 "method": "bdev_nvme_attach_controller" 00:21:23.591 }' 00:21:23.861 [2024-04-18 11:08:52.247523] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:23.861 [2024-04-18 11:08:52.248077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87903 ] 00:21:23.861 [2024-04-18 11:08:52.388137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.119 [2024-04-18 11:08:52.519198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.119 Running I/O for 10 seconds... 
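The waitforio helper traced just below (host_management.sh@45-64) is the gate between starting bdevperf and poking the target: it polls bdevperf's RPC socket until the Nvme0n1 bdev has completed at least 100 reads, proving I/O is actually flowing over the new TCP connection. Roughly, with the retry count and the 100-read threshold taken from the trace and the sleep interval assumed:

    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            break    # 899 reads on the first poll below, so the loop exits immediately
        fi
        sleep 1      # assumed; the delay between polls is not visible in the trace
    done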
00:21:25.057 11:08:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:25.057 11:08:53 -- common/autotest_common.sh@850 -- # return 0 00:21:25.057 11:08:53 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:25.057 11:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.057 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:21:25.057 11:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.057 11:08:53 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:25.057 11:08:53 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:21:25.057 11:08:53 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:25.057 11:08:53 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:21:25.057 11:08:53 -- target/host_management.sh@52 -- # local ret=1 00:21:25.057 11:08:53 -- target/host_management.sh@53 -- # local i 00:21:25.057 11:08:53 -- target/host_management.sh@54 -- # (( i = 10 )) 00:21:25.057 11:08:53 -- target/host_management.sh@54 -- # (( i != 0 )) 00:21:25.057 11:08:53 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:21:25.057 11:08:53 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:21:25.057 11:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.057 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:21:25.057 11:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.057 11:08:53 -- target/host_management.sh@55 -- # read_io_count=899 00:21:25.057 11:08:53 -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:21:25.057 11:08:53 -- target/host_management.sh@59 -- # ret=0 00:21:25.057 11:08:53 -- target/host_management.sh@60 -- # break 00:21:25.057 11:08:53 -- target/host_management.sh@64 -- # return 0 00:21:25.057 11:08:53 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:25.057 11:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.057 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:21:25.057 [2024-04-18 11:08:53.438686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438794] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the 
state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438857] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.438993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.057 [2024-04-18 11:08:53.439103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 
11:08:53.439252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee3050 is same with the state(5) to be set 00:21:25.058 [2024-04-18 11:08:53.439464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 
[2024-04-18 11:08:53.439627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 
11:08:53.439854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.439982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.439994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.440003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.440014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.440024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.440052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.440076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.440088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 
11:08:53.440098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.440110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.058 [2024-04-18 11:08:53.440119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.058 [2024-04-18 11:08:53.440131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 
11:08:53.440320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 
11:08:53.440541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 
11:08:53.440757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.059 [2024-04-18 11:08:53.440957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.059 [2024-04-18 11:08:53.440969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104fbe0 is same with the state(5) to be set 00:21:25.059 [2024-04-18 11:08:53.441083] 
bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x104fbe0 was disconnected and freed. reset controller. 00:21:25.059 [2024-04-18 11:08:53.442329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:25.059 11:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.059 11:08:53 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:25.059 11:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.059 11:08:53 -- common/autotest_common.sh@10 -- # set +x 00:21:25.059 task offset: 122880 on job bdev=Nvme0n1 fails 00:21:25.060 00:21:25.060 Latency(us) 00:21:25.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.060 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.060 Job: Nvme0n1 ended in about 0.71 seconds with error 00:21:25.060 Verification LBA range: start 0x0 length 0x400 00:21:25.060 Nvme0n1 : 0.71 1356.60 84.79 90.44 0.00 43157.83 5302.46 38844.97 00:21:25.060 =================================================================================================================== 00:21:25.060 Total : 1356.60 84.79 90.44 0.00 43157.83 5302.46 38844.97 00:21:25.060 [2024-04-18 11:08:53.445176] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:25.060 [2024-04-18 11:08:53.445216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ecd0 (9): Bad file descriptor 00:21:25.060 11:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.060 11:08:53 -- target/host_management.sh@87 -- # sleep 1 00:21:25.060 [2024-04-18 11:08:53.453945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:25.994 11:08:54 -- target/host_management.sh@91 -- # kill -9 87903 00:21:25.994 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (87903) - No such process 00:21:25.994 11:08:54 -- target/host_management.sh@91 -- # true 00:21:25.994 11:08:54 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:21:25.994 11:08:54 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:25.994 11:08:54 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:21:25.994 11:08:54 -- nvmf/common.sh@521 -- # config=() 00:21:25.994 11:08:54 -- nvmf/common.sh@521 -- # local subsystem config 00:21:25.994 11:08:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:25.994 11:08:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:25.994 { 00:21:25.994 "params": { 00:21:25.994 "name": "Nvme$subsystem", 00:21:25.994 "trtype": "$TEST_TRANSPORT", 00:21:25.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.994 "adrfam": "ipv4", 00:21:25.994 "trsvcid": "$NVMF_PORT", 00:21:25.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.994 "hdgst": ${hdgst:-false}, 00:21:25.994 "ddgst": ${ddgst:-false} 00:21:25.994 }, 00:21:25.994 "method": "bdev_nvme_attach_controller" 00:21:25.994 } 00:21:25.994 EOF 00:21:25.994 )") 00:21:25.994 11:08:54 -- nvmf/common.sh@543 -- # cat 00:21:25.994 11:08:54 -- nvmf/common.sh@545 -- # jq . 
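The wall of "ABORTED - SQ DELETION" completions above is the point of the test rather than a failure of it: host_management.sh@84 removes nqn.2016-06.io.spdk:host0 from cnode0 while bdevperf still has its full queue of 64 reads in flight (-q 64), so the target tears down the qpair and every outstanding command comes back aborted; @85 then re-adds the host, after which the controller reset completes ("Resetting controller successful" above) and bdevperf exits with the expected job failure, which is why the kill -9 of pid 87903 finds no such process. The pair of RPCs, exactly as issued through rpc_cmd in the trace:

    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # bdevperf sees the dropped qpair, aborts its outstanding reads, and resets the controller
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0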
00:21:25.994 11:08:54 -- nvmf/common.sh@546 -- # IFS=, 00:21:25.994 11:08:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:25.994 "params": { 00:21:25.994 "name": "Nvme0", 00:21:25.994 "trtype": "tcp", 00:21:25.994 "traddr": "10.0.0.2", 00:21:25.994 "adrfam": "ipv4", 00:21:25.994 "trsvcid": "4420", 00:21:25.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:25.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:25.994 "hdgst": false, 00:21:25.994 "ddgst": false 00:21:25.994 }, 00:21:25.994 "method": "bdev_nvme_attach_controller" 00:21:25.994 }' 00:21:25.994 [2024-04-18 11:08:54.509733] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:25.994 [2024-04-18 11:08:54.509846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87953 ] 00:21:26.252 [2024-04-18 11:08:54.645500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.252 [2024-04-18 11:08:54.787861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.510 Running I/O for 1 seconds... 00:21:27.444 00:21:27.444 Latency(us) 00:21:27.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.444 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.444 Verification LBA range: start 0x0 length 0x400 00:21:27.444 Nvme0n1 : 1.05 1408.01 88.00 0.00 0.00 44547.70 8162.21 39798.23 00:21:27.444 =================================================================================================================== 00:21:27.444 Total : 1408.01 88.00 0.00 0.00 44547.70 8162.21 39798.23 00:21:28.010 11:08:56 -- target/host_management.sh@102 -- # stoptarget 00:21:28.010 11:08:56 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:21:28.010 11:08:56 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:21:28.010 11:08:56 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:21:28.010 11:08:56 -- target/host_management.sh@40 -- # nvmftestfini 00:21:28.010 11:08:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:28.010 11:08:56 -- nvmf/common.sh@117 -- # sync 00:21:28.010 11:08:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.010 11:08:56 -- nvmf/common.sh@120 -- # set +e 00:21:28.010 11:08:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.010 11:08:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.010 rmmod nvme_tcp 00:21:28.010 rmmod nvme_fabrics 00:21:28.010 rmmod nvme_keyring 00:21:28.010 11:08:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.010 11:08:56 -- nvmf/common.sh@124 -- # set -e 00:21:28.010 11:08:56 -- nvmf/common.sh@125 -- # return 0 00:21:28.010 11:08:56 -- nvmf/common.sh@478 -- # '[' -n 87831 ']' 00:21:28.010 11:08:56 -- nvmf/common.sh@479 -- # killprocess 87831 00:21:28.010 11:08:56 -- common/autotest_common.sh@936 -- # '[' -z 87831 ']' 00:21:28.010 11:08:56 -- common/autotest_common.sh@940 -- # kill -0 87831 00:21:28.010 11:08:56 -- common/autotest_common.sh@941 -- # uname 00:21:28.010 11:08:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:28.010 11:08:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87831 00:21:28.010 killing process with pid 87831 00:21:28.010 11:08:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
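nvmftestfini, traced from nvmf/common.sh@477 above through the killprocess lines that follow, unwinds what nvmftestinit set up: flush I/O, unload the host-side NVMe modules, stop the target, and strip the initiator address. Condensed below; each command appears in the trace, though the retry and error-handling structure around them is simplified:

    sync
    modprobe -v -r nvme-tcp          # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above are its output
    modprobe -v -r nvme-fabrics      # retried up to 20 times per the @121 loop if the modules are still busy
    kill "$nvmfpid"; wait "$nvmfpid" # killprocess 87831
    ip -4 addr flush nvmf_init_if    # nvmf_tcp_fini (@279)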
00:21:28.010 11:08:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:28.010 11:08:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87831' 00:21:28.010 11:08:56 -- common/autotest_common.sh@955 -- # kill 87831 00:21:28.010 11:08:56 -- common/autotest_common.sh@960 -- # wait 87831 00:21:28.269 [2024-04-18 11:08:56.832019] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:21:28.269 11:08:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:28.269 11:08:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:28.269 11:08:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:28.269 11:08:56 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.269 11:08:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.269 11:08:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.269 11:08:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.269 11:08:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.269 11:08:56 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:28.269 00:21:28.269 real 0m5.993s 00:21:28.269 user 0m25.297s 00:21:28.269 sys 0m1.411s 00:21:28.269 11:08:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:28.269 ************************************ 00:21:28.269 END TEST nvmf_host_management 00:21:28.269 ************************************ 00:21:28.269 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:21:28.528 11:08:56 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:28.528 00:21:28.528 real 0m6.586s 00:21:28.528 user 0m25.433s 00:21:28.528 sys 0m1.700s 00:21:28.528 11:08:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:28.528 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:21:28.528 ************************************ 00:21:28.528 END TEST nvmf_host_management 00:21:28.528 ************************************ 00:21:28.528 11:08:56 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:28.528 11:08:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:28.528 11:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:28.528 11:08:56 -- common/autotest_common.sh@10 -- # set +x 00:21:28.528 ************************************ 00:21:28.528 START TEST nvmf_lvol 00:21:28.528 ************************************ 00:21:28.528 11:08:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:28.528 * Looking for test storage... 
00:21:28.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:28.528 11:08:57 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.528 11:08:57 -- nvmf/common.sh@7 -- # uname -s 00:21:28.528 11:08:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.528 11:08:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.528 11:08:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.528 11:08:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.528 11:08:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.528 11:08:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.528 11:08:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.528 11:08:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.528 11:08:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.528 11:08:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.528 11:08:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:28.528 11:08:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:28.528 11:08:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.528 11:08:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.528 11:08:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:28.528 11:08:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.528 11:08:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.528 11:08:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.528 11:08:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.528 11:08:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.528 11:08:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.528 11:08:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.528 11:08:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.528 11:08:57 -- paths/export.sh@5 -- # export PATH 00:21:28.528 11:08:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.528 11:08:57 -- nvmf/common.sh@47 -- # : 0 00:21:28.528 11:08:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.528 11:08:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.528 11:08:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.528 11:08:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.528 11:08:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.528 11:08:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.528 11:08:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.528 11:08:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.787 11:08:57 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.787 11:08:57 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.787 11:08:57 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:21:28.787 11:08:57 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:21:28.787 11:08:57 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:28.787 11:08:57 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:21:28.787 11:08:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:28.787 11:08:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.787 11:08:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:28.787 11:08:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:28.787 11:08:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:28.787 11:08:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.787 11:08:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.787 11:08:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.787 11:08:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:28.787 11:08:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:28.787 11:08:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:28.787 11:08:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:28.787 11:08:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:28.787 11:08:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:28.787 11:08:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.787 11:08:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.787 11:08:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:28.787 11:08:57 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:28.787 11:08:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:28.787 11:08:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:28.787 11:08:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:28.787 11:08:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.787 11:08:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:28.787 11:08:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:28.787 11:08:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:28.787 11:08:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:28.787 11:08:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:28.787 11:08:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:28.787 Cannot find device "nvmf_tgt_br" 00:21:28.787 11:08:57 -- nvmf/common.sh@155 -- # true 00:21:28.787 11:08:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:28.787 Cannot find device "nvmf_tgt_br2" 00:21:28.787 11:08:57 -- nvmf/common.sh@156 -- # true 00:21:28.787 11:08:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:28.787 11:08:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:28.787 Cannot find device "nvmf_tgt_br" 00:21:28.787 11:08:57 -- nvmf/common.sh@158 -- # true 00:21:28.787 11:08:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:28.787 Cannot find device "nvmf_tgt_br2" 00:21:28.787 11:08:57 -- nvmf/common.sh@159 -- # true 00:21:28.787 11:08:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:28.787 11:08:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:28.787 11:08:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:28.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.787 11:08:57 -- nvmf/common.sh@162 -- # true 00:21:28.787 11:08:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:28.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.787 11:08:57 -- nvmf/common.sh@163 -- # true 00:21:28.787 11:08:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:28.787 11:08:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:28.787 11:08:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:28.787 11:08:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:28.787 11:08:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:28.787 11:08:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:28.787 11:08:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:28.787 11:08:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:28.787 11:08:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:28.787 11:08:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:28.787 11:08:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:28.787 11:08:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:29.046 11:08:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:29.046 11:08:57 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:29.046 11:08:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:29.046 11:08:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:29.046 11:08:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:29.046 11:08:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:29.046 11:08:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:29.046 11:08:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:29.046 11:08:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:29.046 11:08:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:29.046 11:08:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:29.046 11:08:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:29.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:21:29.046 00:21:29.046 --- 10.0.0.2 ping statistics --- 00:21:29.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.046 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:29.046 11:08:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:29.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:29.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:29.046 00:21:29.046 --- 10.0.0.3 ping statistics --- 00:21:29.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.046 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:29.046 11:08:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:29.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:21:29.046 00:21:29.046 --- 10.0.0.1 ping statistics --- 00:21:29.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.046 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:29.046 11:08:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.046 11:08:57 -- nvmf/common.sh@422 -- # return 0 00:21:29.046 11:08:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:29.046 11:08:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.046 11:08:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:29.046 11:08:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:29.046 11:08:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.046 11:08:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:29.046 11:08:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:29.046 11:08:57 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:21:29.046 11:08:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:29.046 11:08:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:29.046 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:29.046 11:08:57 -- nvmf/common.sh@470 -- # nvmfpid=88196 00:21:29.046 11:08:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:29.046 11:08:57 -- nvmf/common.sh@471 -- # waitforlisten 88196 00:21:29.046 11:08:57 -- common/autotest_common.sh@817 -- # '[' -z 88196 ']' 00:21:29.046 11:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.046 11:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:29.046 11:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.046 11:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:29.047 11:08:57 -- common/autotest_common.sh@10 -- # set +x 00:21:29.047 [2024-04-18 11:08:57.613367] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:29.047 [2024-04-18 11:08:57.613442] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.305 [2024-04-18 11:08:57.752006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:29.305 [2024-04-18 11:08:57.855561] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.305 [2024-04-18 11:08:57.855634] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.305 [2024-04-18 11:08:57.855648] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.305 [2024-04-18 11:08:57.855658] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.305 [2024-04-18 11:08:57.855668] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:29.305 [2024-04-18 11:08:57.855857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.305 [2024-04-18 11:08:57.856179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.305 [2024-04-18 11:08:57.856195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.241 11:08:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:30.241 11:08:58 -- common/autotest_common.sh@850 -- # return 0 00:21:30.241 11:08:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:30.241 11:08:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:30.241 11:08:58 -- common/autotest_common.sh@10 -- # set +x 00:21:30.241 11:08:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.241 11:08:58 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:30.499 [2024-04-18 11:08:58.907535] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.499 11:08:58 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:30.758 11:08:59 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:21:30.758 11:08:59 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:31.016 11:08:59 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:21:31.016 11:08:59 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:21:31.584 11:08:59 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:21:31.842 11:09:00 -- target/nvmf_lvol.sh@29 -- # lvs=8dd508eb-40a5-4060-8b36-5a4c7ec78703 00:21:31.842 11:09:00 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8dd508eb-40a5-4060-8b36-5a4c7ec78703 lvol 20 00:21:32.101 11:09:00 -- target/nvmf_lvol.sh@32 -- # lvol=ddb1d60a-fd2c-464b-a50b-12a3b887333e 00:21:32.101 11:09:00 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:32.360 11:09:00 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ddb1d60a-fd2c-464b-a50b-12a3b887333e 00:21:32.618 11:09:01 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:32.876 [2024-04-18 11:09:01.477226] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.876 11:09:01 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:33.443 11:09:01 -- target/nvmf_lvol.sh@42 -- # perf_pid=88354 00:21:33.443 11:09:01 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:21:33.443 11:09:01 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:21:34.378 11:09:02 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ddb1d60a-fd2c-464b-a50b-12a3b887333e MY_SNAPSHOT 00:21:34.649 11:09:03 -- target/nvmf_lvol.sh@47 -- # snapshot=eab97074-7726-42ea-8c3f-58bf6f9fa4a7 00:21:34.649 11:09:03 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ddb1d60a-fd2c-464b-a50b-12a3b887333e 30 00:21:34.911 11:09:03 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone eab97074-7726-42ea-8c3f-58bf6f9fa4a7 MY_CLONE 00:21:35.169 11:09:03 -- target/nvmf_lvol.sh@49 -- # clone=9b2ee8ed-57ab-4668-ac30-5e2d19c4361d 00:21:35.169 11:09:03 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9b2ee8ed-57ab-4668-ac30-5e2d19c4361d 00:21:36.106 11:09:04 -- target/nvmf_lvol.sh@53 -- # wait 88354 00:21:44.216 Initializing NVMe Controllers 00:21:44.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:44.216 Controller IO queue size 128, less than required. 00:21:44.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:21:44.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:21:44.216 Initialization complete. Launching workers. 00:21:44.216 ======================================================== 00:21:44.216 Latency(us) 00:21:44.216 Device Information : IOPS MiB/s Average min max 00:21:44.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8063.65 31.50 15885.94 2963.21 94264.60 00:21:44.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7771.65 30.36 16474.83 454.20 123001.96 00:21:44.216 ======================================================== 00:21:44.216 Total : 15835.31 61.86 16174.95 454.20 123001.96 00:21:44.216 00:21:44.216 11:09:12 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:44.216 11:09:12 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ddb1d60a-fd2c-464b-a50b-12a3b887333e 00:21:44.216 11:09:12 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8dd508eb-40a5-4060-8b36-5a4c7ec78703 00:21:44.216 11:09:12 -- target/nvmf_lvol.sh@60 -- # rm -f 00:21:44.474 11:09:12 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:21:44.474 11:09:12 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:21:44.474 11:09:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:44.474 11:09:12 -- nvmf/common.sh@117 -- # sync 00:21:44.474 11:09:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.474 11:09:12 -- nvmf/common.sh@120 -- # set +e 00:21:44.474 11:09:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.474 11:09:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.474 rmmod nvme_tcp 00:21:44.474 rmmod nvme_fabrics 00:21:44.474 rmmod nvme_keyring 00:21:44.475 11:09:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.475 11:09:12 -- nvmf/common.sh@124 -- # set -e 00:21:44.475 11:09:12 -- nvmf/common.sh@125 -- # return 0 00:21:44.475 11:09:12 -- nvmf/common.sh@478 -- # '[' -n 88196 ']' 00:21:44.475 11:09:12 -- nvmf/common.sh@479 -- # killprocess 88196 00:21:44.475 11:09:12 -- common/autotest_common.sh@936 -- # '[' -z 88196 ']' 00:21:44.475 11:09:12 -- common/autotest_common.sh@940 -- # kill -0 88196 00:21:44.475 11:09:12 -- common/autotest_common.sh@941 -- # uname 00:21:44.475 11:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:44.475 11:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
88196 00:21:44.475 11:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:44.475 killing process with pid 88196 00:21:44.475 11:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:44.475 11:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88196' 00:21:44.475 11:09:12 -- common/autotest_common.sh@955 -- # kill 88196 00:21:44.475 11:09:12 -- common/autotest_common.sh@960 -- # wait 88196 00:21:44.732 11:09:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:44.732 11:09:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:44.732 11:09:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:44.732 11:09:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.732 11:09:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.732 11:09:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.732 11:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.732 11:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.732 11:09:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:44.732 ************************************ 00:21:44.732 END TEST nvmf_lvol 00:21:44.732 ************************************ 00:21:44.732 00:21:44.732 real 0m16.193s 00:21:44.732 user 1m7.430s 00:21:44.732 sys 0m3.922s 00:21:44.732 11:09:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:44.732 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:21:44.732 11:09:13 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:44.732 11:09:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:44.732 11:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.732 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:21:44.732 ************************************ 00:21:44.732 START TEST nvmf_lvs_grow 00:21:44.732 ************************************ 00:21:44.732 11:09:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:44.991 * Looking for test storage... 
00:21:44.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:44.991 11:09:13 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.991 11:09:13 -- nvmf/common.sh@7 -- # uname -s 00:21:44.991 11:09:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.991 11:09:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.991 11:09:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.991 11:09:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.991 11:09:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.991 11:09:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.991 11:09:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.991 11:09:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.991 11:09:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.991 11:09:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.991 11:09:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:44.991 11:09:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:21:44.991 11:09:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.991 11:09:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.991 11:09:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.991 11:09:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.991 11:09:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.991 11:09:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.991 11:09:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.991 11:09:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.991 11:09:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.991 11:09:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.991 11:09:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.991 11:09:13 -- paths/export.sh@5 -- # export PATH 00:21:44.991 11:09:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.991 11:09:13 -- nvmf/common.sh@47 -- # : 0 00:21:44.991 11:09:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:44.991 11:09:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:44.991 11:09:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.991 11:09:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.991 11:09:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.991 11:09:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:44.991 11:09:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:44.991 11:09:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:44.991 11:09:13 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.991 11:09:13 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.991 11:09:13 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:21:44.991 11:09:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:44.991 11:09:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.991 11:09:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:44.991 11:09:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:44.991 11:09:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:44.991 11:09:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.991 11:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.991 11:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.991 11:09:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:44.991 11:09:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:44.991 11:09:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:44.991 11:09:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:44.991 11:09:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:44.991 11:09:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:44.991 11:09:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.991 11:09:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.991 11:09:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:44.991 11:09:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:44.991 11:09:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.991 11:09:13 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.991 11:09:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.991 11:09:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.991 11:09:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.991 11:09:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.991 11:09:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.991 11:09:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.991 11:09:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:44.991 11:09:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:44.991 Cannot find device "nvmf_tgt_br" 00:21:44.991 11:09:13 -- nvmf/common.sh@155 -- # true 00:21:44.991 11:09:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.991 Cannot find device "nvmf_tgt_br2" 00:21:44.991 11:09:13 -- nvmf/common.sh@156 -- # true 00:21:44.991 11:09:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:44.991 11:09:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:44.991 Cannot find device "nvmf_tgt_br" 00:21:44.991 11:09:13 -- nvmf/common.sh@158 -- # true 00:21:44.991 11:09:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:44.991 Cannot find device "nvmf_tgt_br2" 00:21:44.991 11:09:13 -- nvmf/common.sh@159 -- # true 00:21:44.992 11:09:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:44.992 11:09:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:44.992 11:09:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.992 11:09:13 -- nvmf/common.sh@162 -- # true 00:21:44.992 11:09:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.992 11:09:13 -- nvmf/common.sh@163 -- # true 00:21:44.992 11:09:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:45.250 11:09:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:45.250 11:09:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:45.250 11:09:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:45.250 11:09:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:45.250 11:09:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:45.250 11:09:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:45.250 11:09:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:45.250 11:09:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:45.250 11:09:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:45.250 11:09:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:45.250 11:09:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:45.250 11:09:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:45.250 11:09:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:45.250 11:09:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:21:45.250 11:09:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:45.250 11:09:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:45.250 11:09:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:45.250 11:09:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:45.250 11:09:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:45.250 11:09:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:45.250 11:09:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:45.250 11:09:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:45.250 11:09:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:45.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:21:45.250 00:21:45.250 --- 10.0.0.2 ping statistics --- 00:21:45.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.250 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:45.250 11:09:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:45.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:45.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:45.250 00:21:45.250 --- 10.0.0.3 ping statistics --- 00:21:45.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.250 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:45.250 11:09:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:45.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:45.250 00:21:45.250 --- 10.0.0.1 ping statistics --- 00:21:45.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.250 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:45.250 11:09:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.250 11:09:13 -- nvmf/common.sh@422 -- # return 0 00:21:45.250 11:09:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:45.250 11:09:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.250 11:09:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:45.250 11:09:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:45.250 11:09:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.250 11:09:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:45.250 11:09:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:45.250 11:09:13 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:21:45.250 11:09:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:45.250 11:09:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:45.250 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:21:45.250 11:09:13 -- nvmf/common.sh@470 -- # nvmfpid=88715 00:21:45.250 11:09:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:45.250 11:09:13 -- nvmf/common.sh@471 -- # waitforlisten 88715 00:21:45.250 11:09:13 -- common/autotest_common.sh@817 -- # '[' -z 88715 ']' 00:21:45.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
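Sketch of the virtual test network that the nvmf_veth_init trace above builds: the target runs in its own network namespace and the initiator stays in the root namespace, joined by a bridge. Interface names and addresses are taken from the trace; the real helper also sets each link up and adds a bridge FORWARD accept rule, exactly as logged:

  ip netns add nvmf_tgt_ns_spdk                                  # target gets its own network stack
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  ip link add nvmf_br type bridge                                # bridge joins the veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                             # reachability check, mirrored in the ping output above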
00:21:45.250 11:09:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.250 11:09:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:45.250 11:09:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.250 11:09:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:45.250 11:09:13 -- common/autotest_common.sh@10 -- # set +x 00:21:45.250 [2024-04-18 11:09:13.885358] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:45.250 [2024-04-18 11:09:13.885676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.509 [2024-04-18 11:09:14.025691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.509 [2024-04-18 11:09:14.118930] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.509 [2024-04-18 11:09:14.119252] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.509 [2024-04-18 11:09:14.119436] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.509 [2024-04-18 11:09:14.119561] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.509 [2024-04-18 11:09:14.119576] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.509 [2024-04-18 11:09:14.119621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.444 11:09:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:46.444 11:09:14 -- common/autotest_common.sh@850 -- # return 0 00:21:46.444 11:09:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:46.444 11:09:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:46.444 11:09:14 -- common/autotest_common.sh@10 -- # set +x 00:21:46.444 11:09:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.444 11:09:14 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:46.702 [2024-04-18 11:09:15.194209] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.702 11:09:15 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:21:46.702 11:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:46.702 11:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:46.702 11:09:15 -- common/autotest_common.sh@10 -- # set +x 00:21:46.702 ************************************ 00:21:46.702 START TEST lvs_grow_clean 00:21:46.702 ************************************ 00:21:46.702 11:09:15 -- common/autotest_common.sh@1111 -- # lvs_grow 00:21:46.702 11:09:15 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:46.702 11:09:15 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:46.702 11:09:15 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:46.702 11:09:15 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:46.703 11:09:15 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:46.703 11:09:15 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:46.703 11:09:15 -- 
target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:46.703 11:09:15 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:46.703 11:09:15 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:47.269 11:09:15 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:47.269 11:09:15 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:47.269 11:09:15 -- target/nvmf_lvs_grow.sh@28 -- # lvs=61280a81-1819-4a5d-b064-6be06d52bebc 00:21:47.269 11:09:15 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:47.269 11:09:15 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:21:47.527 11:09:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:47.527 11:09:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:47.527 11:09:16 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 61280a81-1819-4a5d-b064-6be06d52bebc lvol 150 00:21:47.786 11:09:16 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ecf01866-37e0-467e-abc1-b57572422877 00:21:47.786 11:09:16 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:47.786 11:09:16 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:48.048 [2024-04-18 11:09:16.558750] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:48.048 [2024-04-18 11:09:16.558833] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:48.048 true 00:21:48.048 11:09:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:48.048 11:09:16 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:21:48.307 11:09:16 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:48.307 11:09:16 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:48.565 11:09:17 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ecf01866-37e0-467e-abc1-b57572422877 00:21:48.823 11:09:17 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.081 [2024-04-18 11:09:17.547359] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.081 11:09:17 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:49.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
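Condensed sketch of the lvs_grow_clean provisioning traced above; the paths, sizes and rpc.py calls are taken from the trace, and capturing the returned UUIDs into variables mirrors what the script does with $lvs and $lvol:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio"                                        # 200 MiB backing file
  $rpc bdev_aio_create "$aio" aio_bdev 4096                      # AIO bdev with 4 KiB blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # 4 MiB clusters -> 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB lvol on the store

  truncate -s 400M "$aio"                                        # grow the file underneath the bdev
  $rpc bdev_aio_rescan aio_bdev                                  # block count 51200 -> 102400; lvstore still 49 clusters

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420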
00:21:49.347 11:09:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=88886 00:21:49.347 11:09:17 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:49.347 11:09:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.347 11:09:17 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 88886 /var/tmp/bdevperf.sock 00:21:49.347 11:09:17 -- common/autotest_common.sh@817 -- # '[' -z 88886 ']' 00:21:49.347 11:09:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.347 11:09:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.347 11:09:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.347 11:09:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.347 11:09:17 -- common/autotest_common.sh@10 -- # set +x 00:21:49.347 [2024-04-18 11:09:17.856563] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:49.347 [2024-04-18 11:09:17.856648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88886 ] 00:21:49.631 [2024-04-18 11:09:17.995796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.631 [2024-04-18 11:09:18.099932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.197 11:09:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:50.197 11:09:18 -- common/autotest_common.sh@850 -- # return 0 00:21:50.197 11:09:18 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:50.457 Nvme0n1 00:21:50.457 11:09:19 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:50.715 [ 00:21:50.716 { 00:21:50.716 "aliases": [ 00:21:50.716 "ecf01866-37e0-467e-abc1-b57572422877" 00:21:50.716 ], 00:21:50.716 "assigned_rate_limits": { 00:21:50.716 "r_mbytes_per_sec": 0, 00:21:50.716 "rw_ios_per_sec": 0, 00:21:50.716 "rw_mbytes_per_sec": 0, 00:21:50.716 "w_mbytes_per_sec": 0 00:21:50.716 }, 00:21:50.716 "block_size": 4096, 00:21:50.716 "claimed": false, 00:21:50.716 "driver_specific": { 00:21:50.716 "mp_policy": "active_passive", 00:21:50.716 "nvme": [ 00:21:50.716 { 00:21:50.716 "ctrlr_data": { 00:21:50.716 "ana_reporting": false, 00:21:50.716 "cntlid": 1, 00:21:50.716 "firmware_revision": "24.05", 00:21:50.716 "model_number": "SPDK bdev Controller", 00:21:50.716 "multi_ctrlr": true, 00:21:50.716 "oacs": { 00:21:50.716 "firmware": 0, 00:21:50.716 "format": 0, 00:21:50.716 "ns_manage": 0, 00:21:50.716 "security": 0 00:21:50.716 }, 00:21:50.716 "serial_number": "SPDK0", 00:21:50.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.716 "vendor_id": "0x8086" 00:21:50.716 }, 00:21:50.716 "ns_data": { 00:21:50.716 "can_share": true, 00:21:50.716 "id": 1 00:21:50.716 }, 00:21:50.716 "trid": { 00:21:50.716 "adrfam": "IPv4", 00:21:50.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.716 "traddr": "10.0.0.2", 00:21:50.716 "trsvcid": "4420", 00:21:50.716 "trtype": "TCP" 00:21:50.716 }, 
00:21:50.716 "vs": { 00:21:50.716 "nvme_version": "1.3" 00:21:50.716 } 00:21:50.716 } 00:21:50.716 ] 00:21:50.716 }, 00:21:50.716 "memory_domains": [ 00:21:50.716 { 00:21:50.716 "dma_device_id": "system", 00:21:50.716 "dma_device_type": 1 00:21:50.716 } 00:21:50.716 ], 00:21:50.716 "name": "Nvme0n1", 00:21:50.716 "num_blocks": 38912, 00:21:50.716 "product_name": "NVMe disk", 00:21:50.716 "supported_io_types": { 00:21:50.716 "abort": true, 00:21:50.716 "compare": true, 00:21:50.716 "compare_and_write": true, 00:21:50.716 "flush": true, 00:21:50.716 "nvme_admin": true, 00:21:50.716 "nvme_io": true, 00:21:50.716 "read": true, 00:21:50.716 "reset": true, 00:21:50.716 "unmap": true, 00:21:50.716 "write": true, 00:21:50.716 "write_zeroes": true 00:21:50.716 }, 00:21:50.716 "uuid": "ecf01866-37e0-467e-abc1-b57572422877", 00:21:50.716 "zoned": false 00:21:50.716 } 00:21:50.716 ] 00:21:50.716 11:09:19 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=88935 00:21:50.716 11:09:19 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.716 11:09:19 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:50.974 Running I/O for 10 seconds... 00:21:51.910 Latency(us) 00:21:51.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:51.910 Nvme0n1 : 1.00 8240.00 32.19 0.00 0.00 0.00 0.00 0.00 00:21:51.910 =================================================================================================================== 00:21:51.910 Total : 8240.00 32.19 0.00 0.00 0.00 0.00 0.00 00:21:51.910 00:21:52.845 11:09:21 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:21:52.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:52.845 Nvme0n1 : 2.00 8179.50 31.95 0.00 0.00 0.00 0.00 0.00 00:21:52.845 =================================================================================================================== 00:21:52.845 Total : 8179.50 31.95 0.00 0.00 0.00 0.00 0.00 00:21:52.845 00:21:53.102 true 00:21:53.102 11:09:21 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:21:53.102 11:09:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:53.360 11:09:21 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:53.360 11:09:21 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:53.360 11:09:21 -- target/nvmf_lvs_grow.sh@65 -- # wait 88935 00:21:53.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:53.925 Nvme0n1 : 3.00 8146.00 31.82 0.00 0.00 0.00 0.00 0.00 00:21:53.925 =================================================================================================================== 00:21:53.925 Total : 8146.00 31.82 0.00 0.00 0.00 0.00 0.00 00:21:53.926 00:21:54.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:54.859 Nvme0n1 : 4.00 8046.00 31.43 0.00 0.00 0.00 0.00 0.00 00:21:54.859 =================================================================================================================== 00:21:54.859 Total : 8046.00 31.43 0.00 0.00 0.00 0.00 0.00 00:21:54.859 00:21:56.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:56.233 Nvme0n1 : 5.00 
7990.20 31.21 0.00 0.00 0.00 0.00 0.00 00:21:56.233 =================================================================================================================== 00:21:56.233 Total : 7990.20 31.21 0.00 0.00 0.00 0.00 0.00 00:21:56.233 00:21:56.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:56.798 Nvme0n1 : 6.00 8007.00 31.28 0.00 0.00 0.00 0.00 0.00 00:21:56.798 =================================================================================================================== 00:21:56.798 Total : 8007.00 31.28 0.00 0.00 0.00 0.00 0.00 00:21:56.798 00:21:58.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:58.177 Nvme0n1 : 7.00 8024.86 31.35 0.00 0.00 0.00 0.00 0.00 00:21:58.177 =================================================================================================================== 00:21:58.177 Total : 8024.86 31.35 0.00 0.00 0.00 0.00 0.00 00:21:58.177 00:21:59.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:59.109 Nvme0n1 : 8.00 8031.75 31.37 0.00 0.00 0.00 0.00 0.00 00:21:59.109 =================================================================================================================== 00:21:59.109 Total : 8031.75 31.37 0.00 0.00 0.00 0.00 0.00 00:21:59.109 00:22:00.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:00.042 Nvme0n1 : 9.00 8029.11 31.36 0.00 0.00 0.00 0.00 0.00 00:22:00.042 =================================================================================================================== 00:22:00.042 Total : 8029.11 31.36 0.00 0.00 0.00 0.00 0.00 00:22:00.042 00:22:00.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:00.976 Nvme0n1 : 10.00 8006.80 31.28 0.00 0.00 0.00 0.00 0.00 00:22:00.976 =================================================================================================================== 00:22:00.976 Total : 8006.80 31.28 0.00 0.00 0.00 0.00 0.00 00:22:00.976 00:22:00.976 00:22:00.976 Latency(us) 00:22:00.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:00.976 Nvme0n1 : 10.01 8012.15 31.30 0.00 0.00 15967.24 6345.08 36938.47 00:22:00.976 =================================================================================================================== 00:22:00.976 Total : 8012.15 31.30 0.00 0.00 15967.24 6345.08 36938.47 00:22:00.976 0 00:22:00.976 11:09:29 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 88886 00:22:00.976 11:09:29 -- common/autotest_common.sh@936 -- # '[' -z 88886 ']' 00:22:00.976 11:09:29 -- common/autotest_common.sh@940 -- # kill -0 88886 00:22:00.976 11:09:29 -- common/autotest_common.sh@941 -- # uname 00:22:00.976 11:09:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.976 11:09:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88886 00:22:00.976 killing process with pid 88886 00:22:00.976 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.976 00:22:00.976 Latency(us) 00:22:00.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.976 =================================================================================================================== 00:22:00.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.976 11:09:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:00.976 11:09:29 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:00.976 11:09:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88886' 00:22:00.976 11:09:29 -- common/autotest_common.sh@955 -- # kill 88886 00:22:00.976 11:09:29 -- common/autotest_common.sh@960 -- # wait 88886 00:22:01.235 11:09:29 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:01.493 11:09:30 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:01.493 11:09:30 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:22:01.765 11:09:30 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:01.766 11:09:30 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:22:01.766 11:09:30 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:02.025 [2024-04-18 11:09:30.525659] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:02.025 11:09:30 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:22:02.025 11:09:30 -- common/autotest_common.sh@638 -- # local es=0 00:22:02.025 11:09:30 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:22:02.025 11:09:30 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:02.025 11:09:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:02.025 11:09:30 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:02.025 11:09:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:02.025 11:09:30 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:02.025 11:09:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:02.025 11:09:30 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:02.025 11:09:30 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:02.025 11:09:30 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:22:02.322 2024/04/18 11:09:30 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:61280a81-1819-4a5d-b064-6be06d52bebc], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:22:02.322 request: 00:22:02.322 { 00:22:02.322 "method": "bdev_lvol_get_lvstores", 00:22:02.322 "params": { 00:22:02.322 "uuid": "61280a81-1819-4a5d-b064-6be06d52bebc" 00:22:02.322 } 00:22:02.322 } 00:22:02.322 Got JSON-RPC error response 00:22:02.322 GoRPCClient: error on JSON-RPC call 00:22:02.322 11:09:30 -- common/autotest_common.sh@641 -- # es=1 00:22:02.322 11:09:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:02.322 11:09:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:02.322 11:09:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:02.322 11:09:30 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:02.581 aio_bdev 
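The trace above finishes the clean teardown check: the backing AIO bdev is deleted, bdev_lvol_get_lvstores on the removed lvstore must fail with JSON-RPC code -19 (No such device), and re-creating the AIO bdev lets the lvol store be re-examined. A minimal sketch of that sequence, assuming a running target on the default RPC socket and an illustrative $LVS variable holding the lvstore UUID printed earlier:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    # Remove the backing AIO bdev; the lvstore on top of it is closed as a side effect.
    $rpc bdev_aio_delete aio_bdev
    # Querying the lvstore now is expected to fail with -19 (No such device).
    if $rpc bdev_lvol_get_lvstores -u "$LVS"; then
        echo "lvstore unexpectedly still present" >&2
    fi
    # Re-create the AIO bdev over the same file; lvol metadata is re-read on examine.
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    $rpc bdev_wait_for_examine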
00:22:02.581 11:09:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ecf01866-37e0-467e-abc1-b57572422877 00:22:02.581 11:09:31 -- common/autotest_common.sh@885 -- # local bdev_name=ecf01866-37e0-467e-abc1-b57572422877 00:22:02.581 11:09:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:02.581 11:09:31 -- common/autotest_common.sh@887 -- # local i 00:22:02.581 11:09:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:02.581 11:09:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:02.581 11:09:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:02.840 11:09:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ecf01866-37e0-467e-abc1-b57572422877 -t 2000 00:22:03.099 [ 00:22:03.099 { 00:22:03.099 "aliases": [ 00:22:03.099 "lvs/lvol" 00:22:03.099 ], 00:22:03.099 "assigned_rate_limits": { 00:22:03.099 "r_mbytes_per_sec": 0, 00:22:03.099 "rw_ios_per_sec": 0, 00:22:03.099 "rw_mbytes_per_sec": 0, 00:22:03.099 "w_mbytes_per_sec": 0 00:22:03.099 }, 00:22:03.099 "block_size": 4096, 00:22:03.099 "claimed": false, 00:22:03.099 "driver_specific": { 00:22:03.099 "lvol": { 00:22:03.099 "base_bdev": "aio_bdev", 00:22:03.099 "clone": false, 00:22:03.099 "esnap_clone": false, 00:22:03.099 "lvol_store_uuid": "61280a81-1819-4a5d-b064-6be06d52bebc", 00:22:03.099 "snapshot": false, 00:22:03.099 "thin_provision": false 00:22:03.099 } 00:22:03.099 }, 00:22:03.099 "name": "ecf01866-37e0-467e-abc1-b57572422877", 00:22:03.099 "num_blocks": 38912, 00:22:03.099 "product_name": "Logical Volume", 00:22:03.099 "supported_io_types": { 00:22:03.099 "abort": false, 00:22:03.099 "compare": false, 00:22:03.099 "compare_and_write": false, 00:22:03.099 "flush": false, 00:22:03.099 "nvme_admin": false, 00:22:03.099 "nvme_io": false, 00:22:03.099 "read": true, 00:22:03.099 "reset": true, 00:22:03.099 "unmap": true, 00:22:03.099 "write": true, 00:22:03.099 "write_zeroes": true 00:22:03.099 }, 00:22:03.099 "uuid": "ecf01866-37e0-467e-abc1-b57572422877", 00:22:03.099 "zoned": false 00:22:03.099 } 00:22:03.099 ] 00:22:03.099 11:09:31 -- common/autotest_common.sh@893 -- # return 0 00:22:03.099 11:09:31 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:22:03.099 11:09:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:03.358 11:09:31 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:03.358 11:09:31 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:22:03.358 11:09:31 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:03.618 11:09:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:03.618 11:09:32 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ecf01866-37e0-467e-abc1-b57572422877 00:22:03.876 11:09:32 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61280a81-1819-4a5d-b064-6be06d52bebc 00:22:04.135 11:09:32 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:04.393 11:09:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:04.651 ************************************ 00:22:04.651 END TEST lvs_grow_clean 00:22:04.651 
************************************ 00:22:04.651 00:22:04.651 real 0m17.896s 00:22:04.651 user 0m17.140s 00:22:04.651 sys 0m2.212s 00:22:04.651 11:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:04.651 11:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:04.651 11:09:33 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:22:04.651 11:09:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:04.651 11:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:04.651 11:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:04.909 ************************************ 00:22:04.909 START TEST lvs_grow_dirty 00:22:04.909 ************************************ 00:22:04.909 11:09:33 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:04.909 11:09:33 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:05.167 11:09:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:22:05.167 11:09:33 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:22:05.425 11:09:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4dba0acf-b196-4836-a3da-628c736e713b 00:22:05.425 11:09:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:22:05.425 11:09:33 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:05.682 11:09:34 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:22:05.682 11:09:34 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:22:05.682 11:09:34 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4dba0acf-b196-4836-a3da-628c736e713b lvol 150 00:22:05.940 11:09:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=276b20c8-dfc3-4cc8-9508-4bb9a74ff718 00:22:05.940 11:09:34 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:05.940 11:09:34 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:22:06.197 [2024-04-18 11:09:34.660903] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:22:06.197 [2024-04-18 11:09:34.661001] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:22:06.197 true 00:22:06.197 11:09:34 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4dba0acf-b196-4836-a3da-628c736e713b 00:22:06.197 11:09:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:22:06.455 11:09:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:22:06.455 11:09:34 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:06.713 11:09:35 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 276b20c8-dfc3-4cc8-9508-4bb9a74ff718 00:22:06.971 11:09:35 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:07.228 11:09:35 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:07.486 11:09:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89324 00:22:07.486 11:09:35 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:22:07.486 11:09:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:07.486 11:09:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89324 /var/tmp/bdevperf.sock 00:22:07.486 11:09:35 -- common/autotest_common.sh@817 -- # '[' -z 89324 ']' 00:22:07.486 11:09:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.486 11:09:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:07.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.486 11:09:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.486 11:09:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:07.486 11:09:35 -- common/autotest_common.sh@10 -- # set +x 00:22:07.486 [2024-04-18 11:09:35.984087] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
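The dirty variant set up above exercises the grow-by-rescan path: a 200 MiB file-backed AIO bdev carries an lvstore with 4 MiB clusters (49 data clusters), a 150 MiB lvol is created on it, the file is then grown to 400 MiB and the bdev rescanned. A condensed sketch of that sequence, using the same names and sizes as the trace (the helper variables are illustrative, and the grow itself happens a few steps later while bdevperf I/O is running):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio_file"                       # 200 MiB backing file
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096     # AIO bdev with 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters initially
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB lvol
    truncate -s 400M "$aio_file"                       # grow the file underneath the bdev
    $rpc bdev_aio_rescan aio_bdev                      # bdev picks up the new block count
    # The lvstore does not grow by itself; claiming the new space is a separate RPC.
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after growing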
00:22:07.486 [2024-04-18 11:09:35.984180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89324 ] 00:22:07.487 [2024-04-18 11:09:36.123310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.744 [2024-04-18 11:09:36.210532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.309 11:09:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.309 11:09:36 -- common/autotest_common.sh@850 -- # return 0 00:22:08.309 11:09:36 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:22:08.875 Nvme0n1 00:22:08.875 11:09:37 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:22:08.875 [ 00:22:08.875 { 00:22:08.875 "aliases": [ 00:22:08.875 "276b20c8-dfc3-4cc8-9508-4bb9a74ff718" 00:22:08.875 ], 00:22:08.875 "assigned_rate_limits": { 00:22:08.875 "r_mbytes_per_sec": 0, 00:22:08.875 "rw_ios_per_sec": 0, 00:22:08.875 "rw_mbytes_per_sec": 0, 00:22:08.875 "w_mbytes_per_sec": 0 00:22:08.875 }, 00:22:08.875 "block_size": 4096, 00:22:08.875 "claimed": false, 00:22:08.875 "driver_specific": { 00:22:08.875 "mp_policy": "active_passive", 00:22:08.875 "nvme": [ 00:22:08.875 { 00:22:08.875 "ctrlr_data": { 00:22:08.875 "ana_reporting": false, 00:22:08.875 "cntlid": 1, 00:22:08.875 "firmware_revision": "24.05", 00:22:08.875 "model_number": "SPDK bdev Controller", 00:22:08.875 "multi_ctrlr": true, 00:22:08.875 "oacs": { 00:22:08.875 "firmware": 0, 00:22:08.875 "format": 0, 00:22:08.875 "ns_manage": 0, 00:22:08.875 "security": 0 00:22:08.875 }, 00:22:08.875 "serial_number": "SPDK0", 00:22:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.875 "vendor_id": "0x8086" 00:22:08.875 }, 00:22:08.875 "ns_data": { 00:22:08.875 "can_share": true, 00:22:08.875 "id": 1 00:22:08.875 }, 00:22:08.875 "trid": { 00:22:08.875 "adrfam": "IPv4", 00:22:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.875 "traddr": "10.0.0.2", 00:22:08.875 "trsvcid": "4420", 00:22:08.875 "trtype": "TCP" 00:22:08.875 }, 00:22:08.875 "vs": { 00:22:08.875 "nvme_version": "1.3" 00:22:08.875 } 00:22:08.875 } 00:22:08.875 ] 00:22:08.875 }, 00:22:08.875 "memory_domains": [ 00:22:08.875 { 00:22:08.875 "dma_device_id": "system", 00:22:08.875 "dma_device_type": 1 00:22:08.875 } 00:22:08.875 ], 00:22:08.875 "name": "Nvme0n1", 00:22:08.875 "num_blocks": 38912, 00:22:08.875 "product_name": "NVMe disk", 00:22:08.875 "supported_io_types": { 00:22:08.875 "abort": true, 00:22:08.875 "compare": true, 00:22:08.875 "compare_and_write": true, 00:22:08.875 "flush": true, 00:22:08.875 "nvme_admin": true, 00:22:08.875 "nvme_io": true, 00:22:08.875 "read": true, 00:22:08.875 "reset": true, 00:22:08.875 "unmap": true, 00:22:08.875 "write": true, 00:22:08.875 "write_zeroes": true 00:22:08.875 }, 00:22:08.875 "uuid": "276b20c8-dfc3-4cc8-9508-4bb9a74ff718", 00:22:08.875 "zoned": false 00:22:08.875 } 00:22:08.875 ] 00:22:09.133 11:09:37 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89373 00:22:09.133 11:09:37 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:09.133 11:09:37 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:22:09.133 Running I/O for 10 seconds... 00:22:10.066 Latency(us) 00:22:10.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:10.066 Nvme0n1 : 1.00 8386.00 32.76 0.00 0.00 0.00 0.00 0.00 00:22:10.066 =================================================================================================================== 00:22:10.066 Total : 8386.00 32.76 0.00 0.00 0.00 0.00 0.00 00:22:10.066 00:22:11.061 11:09:39 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:11.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:11.061 Nvme0n1 : 2.00 8409.00 32.85 0.00 0.00 0.00 0.00 0.00 00:22:11.061 =================================================================================================================== 00:22:11.061 Total : 8409.00 32.85 0.00 0.00 0.00 0.00 0.00 00:22:11.061 00:22:11.319 true 00:22:11.319 11:09:39 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:11.319 11:09:39 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:11.577 11:09:40 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:11.577 11:09:40 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:11.577 11:09:40 -- target/nvmf_lvs_grow.sh@65 -- # wait 89373 00:22:12.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:12.144 Nvme0n1 : 3.00 8414.33 32.87 0.00 0.00 0.00 0.00 0.00 00:22:12.144 =================================================================================================================== 00:22:12.144 Total : 8414.33 32.87 0.00 0.00 0.00 0.00 0.00 00:22:12.144 00:22:13.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:13.079 Nvme0n1 : 4.00 8404.75 32.83 0.00 0.00 0.00 0.00 0.00 00:22:13.079 =================================================================================================================== 00:22:13.079 Total : 8404.75 32.83 0.00 0.00 0.00 0.00 0.00 00:22:13.079 00:22:14.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:14.013 Nvme0n1 : 5.00 8192.40 32.00 0.00 0.00 0.00 0.00 0.00 00:22:14.013 =================================================================================================================== 00:22:14.013 Total : 8192.40 32.00 0.00 0.00 0.00 0.00 0.00 00:22:14.013 00:22:15.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:15.385 Nvme0n1 : 6.00 8155.33 31.86 0.00 0.00 0.00 0.00 0.00 00:22:15.385 =================================================================================================================== 00:22:15.385 Total : 8155.33 31.86 0.00 0.00 0.00 0.00 0.00 00:22:15.385 00:22:16.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:16.319 Nvme0n1 : 7.00 8021.71 31.33 0.00 0.00 0.00 0.00 0.00 00:22:16.319 =================================================================================================================== 00:22:16.319 Total : 8021.71 31.33 0.00 0.00 0.00 0.00 0.00 00:22:16.319 00:22:17.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:17.253 Nvme0n1 : 8.00 7702.88 30.09 0.00 0.00 0.00 0.00 0.00 00:22:17.253 
=================================================================================================================== 00:22:17.253 Total : 7702.88 30.09 0.00 0.00 0.00 0.00 0.00 00:22:17.253 00:22:18.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:18.186 Nvme0n1 : 9.00 7689.00 30.04 0.00 0.00 0.00 0.00 0.00 00:22:18.186 =================================================================================================================== 00:22:18.186 Total : 7689.00 30.04 0.00 0.00 0.00 0.00 0.00 00:22:18.186 00:22:19.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:19.121 Nvme0n1 : 10.00 7689.10 30.04 0.00 0.00 0.00 0.00 0.00 00:22:19.121 =================================================================================================================== 00:22:19.122 Total : 7689.10 30.04 0.00 0.00 0.00 0.00 0.00 00:22:19.122 00:22:19.122 00:22:19.122 Latency(us) 00:22:19.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:19.122 Nvme0n1 : 10.01 7692.18 30.05 0.00 0.00 16635.75 7626.01 367954.85 00:22:19.122 =================================================================================================================== 00:22:19.122 Total : 7692.18 30.05 0.00 0.00 16635.75 7626.01 367954.85 00:22:19.122 0 00:22:19.122 11:09:47 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89324 00:22:19.122 11:09:47 -- common/autotest_common.sh@936 -- # '[' -z 89324 ']' 00:22:19.122 11:09:47 -- common/autotest_common.sh@940 -- # kill -0 89324 00:22:19.122 11:09:47 -- common/autotest_common.sh@941 -- # uname 00:22:19.122 11:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:19.122 11:09:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89324 00:22:19.122 killing process with pid 89324 00:22:19.122 Received shutdown signal, test time was about 10.000000 seconds 00:22:19.122 00:22:19.122 Latency(us) 00:22:19.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.122 =================================================================================================================== 00:22:19.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.122 11:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:19.122 11:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:19.122 11:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89324' 00:22:19.122 11:09:47 -- common/autotest_common.sh@955 -- # kill 89324 00:22:19.122 11:09:47 -- common/autotest_common.sh@960 -- # wait 89324 00:22:19.380 11:09:47 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:19.637 11:09:48 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:19.637 11:09:48 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:19.895 11:09:48 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:19.895 11:09:48 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:22:19.895 11:09:48 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 88715 00:22:19.895 11:09:48 -- target/nvmf_lvs_grow.sh@74 -- # wait 88715 00:22:19.895 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 88715 Killed "${NVMF_APP[@]}" "$@" 00:22:19.895 11:09:48 
-- target/nvmf_lvs_grow.sh@74 -- # true 00:22:19.895 11:09:48 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:22:19.895 11:09:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:19.895 11:09:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:19.895 11:09:48 -- common/autotest_common.sh@10 -- # set +x 00:22:19.895 11:09:48 -- nvmf/common.sh@470 -- # nvmfpid=89523 00:22:19.895 11:09:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:19.895 11:09:48 -- nvmf/common.sh@471 -- # waitforlisten 89523 00:22:19.895 11:09:48 -- common/autotest_common.sh@817 -- # '[' -z 89523 ']' 00:22:19.895 11:09:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.895 11:09:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:19.895 11:09:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.895 11:09:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:19.895 11:09:48 -- common/autotest_common.sh@10 -- # set +x 00:22:20.161 [2024-04-18 11:09:48.551362] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:20.161 [2024-04-18 11:09:48.551483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.161 [2024-04-18 11:09:48.701398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.161 [2024-04-18 11:09:48.785444] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.161 [2024-04-18 11:09:48.785530] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.161 [2024-04-18 11:09:48.785542] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.161 [2024-04-18 11:09:48.785551] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.161 [2024-04-18 11:09:48.785559] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
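What follows is the dirty-recovery half of the test: the previous nvmf target holding the lvstore was stopped with kill -9, so the blobstore was never cleanly unloaded, and the freshly started target has to recover it once the backing AIO bdev is registered again. A rough sketch of that step, assuming the same backing file as above (the bdev name used in the final lookup is a placeholder):

    # The old target was killed with SIGKILL while the lvstore was open,
    # so there was no clean blobstore unload. After restarting nvmf_tgt,
    # re-register the backing file; examine triggers blobstore recovery.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # Expected target-side notices during examine (visible in the trace below):
    #   blobstore.c: ... Performing recovery on blobstore
    #   blobstore.c: ... Recover: blob 0x0 / 0x1
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b <lvol-uuid> -t 2000   # the lvol reappears once recovery completes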
00:22:20.161 [2024-04-18 11:09:48.785595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.097 11:09:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:21.097 11:09:49 -- common/autotest_common.sh@850 -- # return 0 00:22:21.097 11:09:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:21.097 11:09:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:21.097 11:09:49 -- common/autotest_common.sh@10 -- # set +x 00:22:21.097 11:09:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.097 11:09:49 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:21.356 [2024-04-18 11:09:49.790469] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:21.356 [2024-04-18 11:09:49.791731] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:21.356 [2024-04-18 11:09:49.792175] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:21.356 11:09:49 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:22:21.356 11:09:49 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 276b20c8-dfc3-4cc8-9508-4bb9a74ff718 00:22:21.356 11:09:49 -- common/autotest_common.sh@885 -- # local bdev_name=276b20c8-dfc3-4cc8-9508-4bb9a74ff718 00:22:21.356 11:09:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:21.356 11:09:49 -- common/autotest_common.sh@887 -- # local i 00:22:21.356 11:09:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:21.356 11:09:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:21.356 11:09:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:21.615 11:09:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 276b20c8-dfc3-4cc8-9508-4bb9a74ff718 -t 2000 00:22:21.873 [ 00:22:21.873 { 00:22:21.873 "aliases": [ 00:22:21.873 "lvs/lvol" 00:22:21.873 ], 00:22:21.873 "assigned_rate_limits": { 00:22:21.873 "r_mbytes_per_sec": 0, 00:22:21.873 "rw_ios_per_sec": 0, 00:22:21.873 "rw_mbytes_per_sec": 0, 00:22:21.873 "w_mbytes_per_sec": 0 00:22:21.873 }, 00:22:21.873 "block_size": 4096, 00:22:21.873 "claimed": false, 00:22:21.873 "driver_specific": { 00:22:21.873 "lvol": { 00:22:21.873 "base_bdev": "aio_bdev", 00:22:21.873 "clone": false, 00:22:21.873 "esnap_clone": false, 00:22:21.873 "lvol_store_uuid": "4dba0acf-b196-4836-a3da-628c736e713b", 00:22:21.873 "snapshot": false, 00:22:21.873 "thin_provision": false 00:22:21.873 } 00:22:21.873 }, 00:22:21.873 "name": "276b20c8-dfc3-4cc8-9508-4bb9a74ff718", 00:22:21.873 "num_blocks": 38912, 00:22:21.873 "product_name": "Logical Volume", 00:22:21.873 "supported_io_types": { 00:22:21.873 "abort": false, 00:22:21.873 "compare": false, 00:22:21.873 "compare_and_write": false, 00:22:21.873 "flush": false, 00:22:21.873 "nvme_admin": false, 00:22:21.873 "nvme_io": false, 00:22:21.873 "read": true, 00:22:21.873 "reset": true, 00:22:21.873 "unmap": true, 00:22:21.873 "write": true, 00:22:21.873 "write_zeroes": true 00:22:21.873 }, 00:22:21.873 "uuid": "276b20c8-dfc3-4cc8-9508-4bb9a74ff718", 00:22:21.873 "zoned": false 00:22:21.873 } 00:22:21.873 ] 00:22:21.873 11:09:50 -- common/autotest_common.sh@893 -- # return 0 00:22:21.873 11:09:50 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4dba0acf-b196-4836-a3da-628c736e713b 00:22:21.873 11:09:50 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:22:22.132 11:09:50 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:22:22.132 11:09:50 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:22:22.132 11:09:50 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:22.391 11:09:50 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:22:22.391 11:09:50 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:22.648 [2024-04-18 11:09:51.059843] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:22.648 11:09:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:22.648 11:09:51 -- common/autotest_common.sh@638 -- # local es=0 00:22:22.648 11:09:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:22.648 11:09:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.648 11:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:22.648 11:09:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.648 11:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:22.648 11:09:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.648 11:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:22.649 11:09:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.649 11:09:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:22.649 11:09:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:22.907 2024/04/18 11:09:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4dba0acf-b196-4836-a3da-628c736e713b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:22:22.907 request: 00:22:22.907 { 00:22:22.907 "method": "bdev_lvol_get_lvstores", 00:22:22.907 "params": { 00:22:22.907 "uuid": "4dba0acf-b196-4836-a3da-628c736e713b" 00:22:22.907 } 00:22:22.907 } 00:22:22.907 Got JSON-RPC error response 00:22:22.907 GoRPCClient: error on JSON-RPC call 00:22:22.907 11:09:51 -- common/autotest_common.sh@641 -- # es=1 00:22:22.907 11:09:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:22.907 11:09:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:22.907 11:09:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:22.907 11:09:51 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:23.165 aio_bdev 00:22:23.166 11:09:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 276b20c8-dfc3-4cc8-9508-4bb9a74ff718 00:22:23.166 11:09:51 -- common/autotest_common.sh@885 -- # local bdev_name=276b20c8-dfc3-4cc8-9508-4bb9a74ff718 00:22:23.166 11:09:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:23.166 
11:09:51 -- common/autotest_common.sh@887 -- # local i 00:22:23.166 11:09:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:23.166 11:09:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:23.166 11:09:51 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:23.424 11:09:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 276b20c8-dfc3-4cc8-9508-4bb9a74ff718 -t 2000 00:22:23.682 [ 00:22:23.682 { 00:22:23.682 "aliases": [ 00:22:23.682 "lvs/lvol" 00:22:23.682 ], 00:22:23.682 "assigned_rate_limits": { 00:22:23.682 "r_mbytes_per_sec": 0, 00:22:23.682 "rw_ios_per_sec": 0, 00:22:23.682 "rw_mbytes_per_sec": 0, 00:22:23.683 "w_mbytes_per_sec": 0 00:22:23.683 }, 00:22:23.683 "block_size": 4096, 00:22:23.683 "claimed": false, 00:22:23.683 "driver_specific": { 00:22:23.683 "lvol": { 00:22:23.683 "base_bdev": "aio_bdev", 00:22:23.683 "clone": false, 00:22:23.683 "esnap_clone": false, 00:22:23.683 "lvol_store_uuid": "4dba0acf-b196-4836-a3da-628c736e713b", 00:22:23.683 "snapshot": false, 00:22:23.683 "thin_provision": false 00:22:23.683 } 00:22:23.683 }, 00:22:23.683 "name": "276b20c8-dfc3-4cc8-9508-4bb9a74ff718", 00:22:23.683 "num_blocks": 38912, 00:22:23.683 "product_name": "Logical Volume", 00:22:23.683 "supported_io_types": { 00:22:23.683 "abort": false, 00:22:23.683 "compare": false, 00:22:23.683 "compare_and_write": false, 00:22:23.683 "flush": false, 00:22:23.683 "nvme_admin": false, 00:22:23.683 "nvme_io": false, 00:22:23.683 "read": true, 00:22:23.683 "reset": true, 00:22:23.683 "unmap": true, 00:22:23.683 "write": true, 00:22:23.683 "write_zeroes": true 00:22:23.683 }, 00:22:23.683 "uuid": "276b20c8-dfc3-4cc8-9508-4bb9a74ff718", 00:22:23.683 "zoned": false 00:22:23.683 } 00:22:23.683 ] 00:22:23.683 11:09:52 -- common/autotest_common.sh@893 -- # return 0 00:22:23.683 11:09:52 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:23.683 11:09:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:23.941 11:09:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:23.941 11:09:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:23.941 11:09:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:24.242 11:09:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:24.242 11:09:52 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 276b20c8-dfc3-4cc8-9508-4bb9a74ff718 00:22:24.507 11:09:52 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4dba0acf-b196-4836-a3da-628c736e713b 00:22:24.764 11:09:53 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:25.021 11:09:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:25.278 ************************************ 00:22:25.278 END TEST lvs_grow_dirty 00:22:25.278 ************************************ 00:22:25.278 00:22:25.278 real 0m20.506s 00:22:25.278 user 0m42.766s 00:22:25.278 sys 0m7.816s 00:22:25.278 11:09:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:25.278 11:09:53 -- common/autotest_common.sh@10 -- # set +x 00:22:25.278 11:09:53 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:22:25.278 11:09:53 -- common/autotest_common.sh@794 -- # type=--id 00:22:25.278 11:09:53 -- common/autotest_common.sh@795 -- # id=0 00:22:25.278 11:09:53 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:22:25.278 11:09:53 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:25.278 11:09:53 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:22:25.278 11:09:53 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:22:25.278 11:09:53 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:22:25.278 11:09:53 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:25.278 nvmf_trace.0 00:22:25.278 11:09:53 -- common/autotest_common.sh@809 -- # return 0 00:22:25.278 11:09:53 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:22:25.278 11:09:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:25.278 11:09:53 -- nvmf/common.sh@117 -- # sync 00:22:25.535 11:09:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.535 11:09:54 -- nvmf/common.sh@120 -- # set +e 00:22:25.535 11:09:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.535 11:09:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.535 rmmod nvme_tcp 00:22:25.535 rmmod nvme_fabrics 00:22:25.535 rmmod nvme_keyring 00:22:25.535 11:09:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.535 11:09:54 -- nvmf/common.sh@124 -- # set -e 00:22:25.535 11:09:54 -- nvmf/common.sh@125 -- # return 0 00:22:25.535 11:09:54 -- nvmf/common.sh@478 -- # '[' -n 89523 ']' 00:22:25.535 11:09:54 -- nvmf/common.sh@479 -- # killprocess 89523 00:22:25.535 11:09:54 -- common/autotest_common.sh@936 -- # '[' -z 89523 ']' 00:22:25.535 11:09:54 -- common/autotest_common.sh@940 -- # kill -0 89523 00:22:25.535 11:09:54 -- common/autotest_common.sh@941 -- # uname 00:22:25.535 11:09:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.535 11:09:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89523 00:22:25.535 11:09:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:25.535 11:09:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:25.535 killing process with pid 89523 00:22:25.535 11:09:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89523' 00:22:25.535 11:09:54 -- common/autotest_common.sh@955 -- # kill 89523 00:22:25.535 11:09:54 -- common/autotest_common.sh@960 -- # wait 89523 00:22:25.794 11:09:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:25.794 11:09:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:25.794 11:09:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:25.795 11:09:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.795 11:09:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.795 11:09:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.795 11:09:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.795 11:09:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.795 11:09:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:25.795 00:22:25.795 real 0m41.046s 00:22:25.795 user 1m6.402s 00:22:25.795 sys 0m10.904s 00:22:25.795 11:09:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:25.795 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:22:25.795 
************************************ 00:22:25.795 END TEST nvmf_lvs_grow 00:22:25.795 ************************************ 00:22:26.053 11:09:54 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:26.053 11:09:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:26.053 11:09:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:26.053 11:09:54 -- common/autotest_common.sh@10 -- # set +x 00:22:26.053 ************************************ 00:22:26.053 START TEST nvmf_bdev_io_wait 00:22:26.053 ************************************ 00:22:26.054 11:09:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:26.054 * Looking for test storage... 00:22:26.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:26.054 11:09:54 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.054 11:09:54 -- nvmf/common.sh@7 -- # uname -s 00:22:26.054 11:09:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.054 11:09:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.054 11:09:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.054 11:09:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.054 11:09:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.054 11:09:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.054 11:09:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.054 11:09:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.054 11:09:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.054 11:09:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.054 11:09:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:22:26.054 11:09:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:22:26.054 11:09:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.054 11:09:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.054 11:09:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.054 11:09:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.054 11:09:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.054 11:09:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.054 11:09:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.054 11:09:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.054 11:09:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.054 11:09:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.054 11:09:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.054 11:09:54 -- paths/export.sh@5 -- # export PATH 00:22:26.054 11:09:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.054 11:09:54 -- nvmf/common.sh@47 -- # : 0 00:22:26.054 11:09:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.054 11:09:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.054 11:09:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.054 11:09:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.054 11:09:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.054 11:09:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.054 11:09:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.054 11:09:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.054 11:09:54 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.054 11:09:54 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.054 11:09:54 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:22:26.054 11:09:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:26.054 11:09:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.054 11:09:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:26.054 11:09:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:26.054 11:09:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:26.054 11:09:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.054 11:09:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.054 11:09:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.054 11:09:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:26.054 11:09:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:26.054 11:09:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:26.054 11:09:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:26.054 11:09:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
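The trace below builds the virtual test network that nvmf_veth_init uses: a network namespace for the target, veth pairs enslaved to a bridge, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace. A compressed sketch of the same topology (this only condenses the commands traced below, it is not a replacement for nvmf_veth_init):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # The target is then run inside the namespace (ip netns exec nvmf_tgt_ns_spdk ...),
    # and reachability is verified with a single ping before the test proceeds.
    ping -c 1 10.0.0.2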
00:22:26.054 11:09:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:26.054 11:09:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.054 11:09:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.054 11:09:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:26.054 11:09:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:26.054 11:09:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.054 11:09:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.054 11:09:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.054 11:09:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.054 11:09:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.054 11:09:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.054 11:09:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.054 11:09:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.054 11:09:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:26.054 11:09:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:26.054 Cannot find device "nvmf_tgt_br" 00:22:26.054 11:09:54 -- nvmf/common.sh@155 -- # true 00:22:26.054 11:09:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.054 Cannot find device "nvmf_tgt_br2" 00:22:26.054 11:09:54 -- nvmf/common.sh@156 -- # true 00:22:26.054 11:09:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:26.054 11:09:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:26.312 Cannot find device "nvmf_tgt_br" 00:22:26.312 11:09:54 -- nvmf/common.sh@158 -- # true 00:22:26.312 11:09:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:26.312 Cannot find device "nvmf_tgt_br2" 00:22:26.312 11:09:54 -- nvmf/common.sh@159 -- # true 00:22:26.312 11:09:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:26.312 11:09:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:26.312 11:09:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.312 11:09:54 -- nvmf/common.sh@162 -- # true 00:22:26.312 11:09:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.312 11:09:54 -- nvmf/common.sh@163 -- # true 00:22:26.312 11:09:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.312 11:09:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.312 11:09:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.312 11:09:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.312 11:09:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.312 11:09:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.312 11:09:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.312 11:09:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:26.312 11:09:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:26.312 
11:09:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:26.312 11:09:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:26.312 11:09:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:26.312 11:09:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:26.312 11:09:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.312 11:09:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.312 11:09:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.312 11:09:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:26.312 11:09:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:26.312 11:09:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.312 11:09:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.570 11:09:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.570 11:09:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.570 11:09:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.570 11:09:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:26.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:22:26.570 00:22:26.570 --- 10.0.0.2 ping statistics --- 00:22:26.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.570 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:26.570 11:09:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:26.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:26.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:22:26.570 00:22:26.570 --- 10.0.0.3 ping statistics --- 00:22:26.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.570 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:26.570 11:09:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:22:26.570 00:22:26.570 --- 10.0.0.1 ping statistics --- 00:22:26.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.570 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:26.570 11:09:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.570 11:09:54 -- nvmf/common.sh@422 -- # return 0 00:22:26.570 11:09:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:26.570 11:09:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.570 11:09:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:26.570 11:09:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:26.570 11:09:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.570 11:09:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:26.570 11:09:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:26.570 11:09:55 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:26.570 11:09:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:26.570 11:09:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:26.570 11:09:55 -- common/autotest_common.sh@10 -- # set +x 00:22:26.570 11:09:55 -- nvmf/common.sh@470 -- # nvmfpid=89948 00:22:26.570 11:09:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:26.570 11:09:55 -- nvmf/common.sh@471 -- # waitforlisten 89948 00:22:26.570 11:09:55 -- common/autotest_common.sh@817 -- # '[' -z 89948 ']' 00:22:26.570 11:09:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.570 11:09:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:26.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.570 11:09:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.570 11:09:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:26.570 11:09:55 -- common/autotest_common.sh@10 -- # set +x 00:22:26.570 [2024-04-18 11:09:55.078865] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:26.570 [2024-04-18 11:09:55.078968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.828 [2024-04-18 11:09:55.220140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.828 [2024-04-18 11:09:55.326053] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.828 [2024-04-18 11:09:55.326111] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.828 [2024-04-18 11:09:55.326124] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.828 [2024-04-18 11:09:55.326135] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.828 [2024-04-18 11:09:55.326145] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
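The bdev_io_wait test starts the target with --wait-for-rpc so the bdev I/O pools can be shrunk before the framework initializes; with only a handful of bdev_io buffers available, the bdevperf jobs launched later are pushed onto the queue-io-wait path that the test is meant to exercise. A minimal sketch of that startup ordering, using the pool sizes and names seen in the trace below:

    # Start the target paused, configure it over RPC, then let the framework come up.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1          # tiny bdev_io pool: 5 total, 1 per-thread cache
    $rpc framework_start_init                # subsystems now initialize with those options
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420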
00:22:26.828 [2024-04-18 11:09:55.326274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.828 [2024-04-18 11:09:55.326561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.828 [2024-04-18 11:09:55.327024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.828 [2024-04-18 11:09:55.327061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.761 11:09:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:27.761 11:09:56 -- common/autotest_common.sh@850 -- # return 0 00:22:27.761 11:09:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:27.761 11:09:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 11:09:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:22:27.761 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:22:27.761 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.761 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 [2024-04-18 11:09:56.220828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.761 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:27.761 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 Malloc0 00:22:27.761 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.761 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:27.761 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.761 11:09:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.761 11:09:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.761 [2024-04-18 11:09:56.283561] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.761 11:09:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=90001 00:22:27.761 11:09:56 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@30 -- # READ_PID=90003 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # config=() 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # local subsystem config 00:22:27.761 11:09:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=90005 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:22:27.761 11:09:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.761 { 00:22:27.761 "params": { 00:22:27.761 "name": "Nvme$subsystem", 00:22:27.761 "trtype": "$TEST_TRANSPORT", 00:22:27.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.761 "adrfam": "ipv4", 00:22:27.761 "trsvcid": "$NVMF_PORT", 00:22:27.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.761 "hdgst": ${hdgst:-false}, 00:22:27.761 "ddgst": ${ddgst:-false} 00:22:27.761 }, 00:22:27.761 "method": "bdev_nvme_attach_controller" 00:22:27.761 } 00:22:27.761 EOF 00:22:27.761 )") 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # config=() 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # local subsystem config 00:22:27.761 11:09:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.761 11:09:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.761 { 00:22:27.761 "params": { 00:22:27.761 "name": "Nvme$subsystem", 00:22:27.761 "trtype": "$TEST_TRANSPORT", 00:22:27.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.761 "adrfam": "ipv4", 00:22:27.761 "trsvcid": "$NVMF_PORT", 00:22:27.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.761 "hdgst": ${hdgst:-false}, 00:22:27.761 "ddgst": ${ddgst:-false} 00:22:27.761 }, 00:22:27.761 "method": "bdev_nvme_attach_controller" 00:22:27.761 } 00:22:27.761 EOF 00:22:27.761 )") 00:22:27.761 11:09:56 -- nvmf/common.sh@543 -- # cat 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # config=() 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # local subsystem config 00:22:27.761 11:09:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.761 11:09:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.761 { 00:22:27.761 "params": { 00:22:27.761 "name": "Nvme$subsystem", 00:22:27.761 "trtype": "$TEST_TRANSPORT", 00:22:27.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.761 "adrfam": "ipv4", 00:22:27.761 "trsvcid": "$NVMF_PORT", 00:22:27.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.761 "hdgst": ${hdgst:-false}, 00:22:27.761 "ddgst": ${ddgst:-false} 00:22:27.761 }, 00:22:27.761 "method": "bdev_nvme_attach_controller" 00:22:27.761 } 00:22:27.761 EOF 00:22:27.761 )") 00:22:27.761 11:09:56 -- nvmf/common.sh@543 -- # cat 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=90007 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@35 -- # sync 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:22:27.761 11:09:56 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # config=() 00:22:27.761 11:09:56 -- nvmf/common.sh@521 -- # local subsystem config 00:22:27.761 11:09:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.761 11:09:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.762 { 00:22:27.762 "params": { 00:22:27.762 "name": "Nvme$subsystem", 00:22:27.762 "trtype": "$TEST_TRANSPORT", 00:22:27.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.762 "adrfam": "ipv4", 00:22:27.762 "trsvcid": "$NVMF_PORT", 00:22:27.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.762 "hdgst": ${hdgst:-false}, 00:22:27.762 "ddgst": ${ddgst:-false} 00:22:27.762 }, 00:22:27.762 "method": "bdev_nvme_attach_controller" 00:22:27.762 } 00:22:27.762 EOF 00:22:27.762 )") 00:22:27.762 11:09:56 -- nvmf/common.sh@543 -- # cat 00:22:27.762 11:09:56 -- nvmf/common.sh@545 -- # jq . 00:22:27.762 11:09:56 -- nvmf/common.sh@545 -- # jq . 00:22:27.762 11:09:56 -- nvmf/common.sh@546 -- # IFS=, 00:22:27.762 11:09:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:27.762 "params": { 00:22:27.762 "name": "Nvme1", 00:22:27.762 "trtype": "tcp", 00:22:27.762 "traddr": "10.0.0.2", 00:22:27.762 "adrfam": "ipv4", 00:22:27.762 "trsvcid": "4420", 00:22:27.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.762 "hdgst": false, 00:22:27.762 "ddgst": false 00:22:27.762 }, 00:22:27.762 "method": "bdev_nvme_attach_controller" 00:22:27.762 }' 00:22:27.762 11:09:56 -- nvmf/common.sh@546 -- # IFS=, 00:22:27.762 11:09:56 -- nvmf/common.sh@543 -- # cat 00:22:27.762 11:09:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:27.762 "params": { 00:22:27.762 "name": "Nvme1", 00:22:27.762 "trtype": "tcp", 00:22:27.762 "traddr": "10.0.0.2", 00:22:27.762 "adrfam": "ipv4", 00:22:27.762 "trsvcid": "4420", 00:22:27.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.762 "hdgst": false, 00:22:27.762 "ddgst": false 00:22:27.762 }, 00:22:27.762 "method": "bdev_nvme_attach_controller" 00:22:27.762 }' 00:22:27.762 11:09:56 -- nvmf/common.sh@545 -- # jq . 00:22:27.762 11:09:56 -- nvmf/common.sh@546 -- # IFS=, 00:22:27.762 11:09:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:27.762 "params": { 00:22:27.762 "name": "Nvme1", 00:22:27.762 "trtype": "tcp", 00:22:27.762 "traddr": "10.0.0.2", 00:22:27.762 "adrfam": "ipv4", 00:22:27.762 "trsvcid": "4420", 00:22:27.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.762 "hdgst": false, 00:22:27.762 "ddgst": false 00:22:27.762 }, 00:22:27.762 "method": "bdev_nvme_attach_controller" 00:22:27.762 }' 00:22:27.762 11:09:56 -- nvmf/common.sh@545 -- # jq . 
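The trace above stood up the NVMe/TCP target for the bdev_io_wait test entirely through RPCs. A minimal sketch of the same sequence as direct scripts/rpc.py calls against the running nvmf_tgt (default socket /var/tmp/spdk.sock); the harness issues exactly these calls through its rpc_cmd wrapper:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1              # 5-entry bdev_io pool, 1-entry cache -- presumably to force the io_wait path this test targets
$rpc framework_start_init                    # finish subsystem init after the pre-init option RPC
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420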
00:22:27.762 11:09:56 -- nvmf/common.sh@546 -- # IFS=, 00:22:27.762 11:09:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:27.762 "params": { 00:22:27.762 "name": "Nvme1", 00:22:27.762 "trtype": "tcp", 00:22:27.762 "traddr": "10.0.0.2", 00:22:27.762 "adrfam": "ipv4", 00:22:27.762 "trsvcid": "4420", 00:22:27.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.762 "hdgst": false, 00:22:27.762 "ddgst": false 00:22:27.762 }, 00:22:27.762 "method": "bdev_nvme_attach_controller" 00:22:27.762 }' 00:22:27.762 [2024-04-18 11:09:56.339248] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:27.762 [2024-04-18 11:09:56.339319] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:22:27.762 [2024-04-18 11:09:56.350289] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:27.762 11:09:56 -- target/bdev_io_wait.sh@37 -- # wait 90001 00:22:27.762 [2024-04-18 11:09:56.350927] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:22:27.762 [2024-04-18 11:09:56.375632] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:27.762 [2024-04-18 11:09:56.375746] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:27.762 [2024-04-18 11:09:56.381183] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:27.762 [2024-04-18 11:09:56.381292] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:22:28.019 [2024-04-18 11:09:56.547310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.019 [2024-04-18 11:09:56.622350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.019 [2024-04-18 11:09:56.623292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:28.312 [2024-04-18 11:09:56.695102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.312 [2024-04-18 11:09:56.700081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:28.312 [2024-04-18 11:09:56.770076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:28.312 [2024-04-18 11:09:56.772068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.312 Running I/O for 1 seconds... 00:22:28.312 [2024-04-18 11:09:56.844660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:28.312 Running I/O for 1 seconds... 00:22:28.312 Running I/O for 1 seconds... 00:22:28.570 Running I/O for 1 seconds... 
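Each of the four bdevperf instances above receives its bdev configuration through --json /dev/fd/63, i.e. a process substitution of gen_nvmf_target_json. A sketch of one run (the write instance, pinned to core 4 by -m 0x10) with the config written to an explicit file instead, paths relative to the SPDK repo root; the inner config entry is exactly what the trace printed, while the outer "subsystems" wrapper is assumed to be the standard SPDK JSON-config shape:
cat > /tmp/nvme_write.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 128-deep, 4 KiB writes for 1 s against the attached NVMe-oF bdev, 256 MiB of memory
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme_write.json -q 128 -o 4096 -w write -t 1 -s 256
The read, flush and unmap instances differ only in -m/-i and -w, which is why four separate "Running I/O for 1 seconds..." lines appear before the result tables.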
00:22:29.503 00:22:29.503 Latency(us) 00:22:29.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.503 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:22:29.503 Nvme1n1 : 1.02 6731.34 26.29 0.00 0.00 18726.50 7506.85 29908.25 00:22:29.503 =================================================================================================================== 00:22:29.503 Total : 6731.34 26.29 0.00 0.00 18726.50 7506.85 29908.25 00:22:29.503 00:22:29.503 Latency(us) 00:22:29.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.503 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:22:29.503 Nvme1n1 : 1.00 191430.00 747.77 0.00 0.00 666.03 255.07 1117.09 00:22:29.503 =================================================================================================================== 00:22:29.503 Total : 191430.00 747.77 0.00 0.00 666.03 255.07 1117.09 00:22:29.503 00:22:29.503 Latency(us) 00:22:29.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.503 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:22:29.503 Nvme1n1 : 1.01 8816.79 34.44 0.00 0.00 14448.77 7685.59 25618.62 00:22:29.503 =================================================================================================================== 00:22:29.503 Total : 8816.79 34.44 0.00 0.00 14448.77 7685.59 25618.62 00:22:29.503 00:22:29.503 Latency(us) 00:22:29.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.503 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:22:29.503 Nvme1n1 : 1.01 6442.56 25.17 0.00 0.00 19788.07 7328.12 46470.98 00:22:29.503 =================================================================================================================== 00:22:29.503 Total : 6442.56 25.17 0.00 0.00 19788.07 7328.12 46470.98 00:22:29.761 11:09:58 -- target/bdev_io_wait.sh@38 -- # wait 90003 00:22:29.761 11:09:58 -- target/bdev_io_wait.sh@39 -- # wait 90005 00:22:29.761 11:09:58 -- target/bdev_io_wait.sh@40 -- # wait 90007 00:22:29.761 11:09:58 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.761 11:09:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.761 11:09:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.761 11:09:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.761 11:09:58 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:22:29.761 11:09:58 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:22:29.761 11:09:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:29.761 11:09:58 -- nvmf/common.sh@117 -- # sync 00:22:29.761 11:09:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.761 11:09:58 -- nvmf/common.sh@120 -- # set +e 00:22:29.761 11:09:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.761 11:09:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.761 rmmod nvme_tcp 00:22:29.761 rmmod nvme_fabrics 00:22:29.761 rmmod nvme_keyring 00:22:29.761 11:09:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.761 11:09:58 -- nvmf/common.sh@124 -- # set -e 00:22:29.761 11:09:58 -- nvmf/common.sh@125 -- # return 0 00:22:29.762 11:09:58 -- nvmf/common.sh@478 -- # '[' -n 89948 ']' 00:22:29.762 11:09:58 -- nvmf/common.sh@479 -- # killprocess 89948 00:22:29.762 11:09:58 -- common/autotest_common.sh@936 -- # '[' -z 89948 ']' 00:22:29.762 11:09:58 -- common/autotest_common.sh@940 -- 
# kill -0 89948 00:22:29.762 11:09:58 -- common/autotest_common.sh@941 -- # uname 00:22:29.762 11:09:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:29.762 11:09:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89948 00:22:29.762 11:09:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:29.762 11:09:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:29.762 killing process with pid 89948 00:22:29.762 11:09:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89948' 00:22:29.762 11:09:58 -- common/autotest_common.sh@955 -- # kill 89948 00:22:29.762 11:09:58 -- common/autotest_common.sh@960 -- # wait 89948 00:22:30.019 11:09:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:30.020 11:09:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:30.020 11:09:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:30.020 11:09:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.020 11:09:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.020 11:09:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.020 11:09:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.020 11:09:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.020 11:09:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:30.020 00:22:30.020 real 0m4.091s 00:22:30.020 user 0m17.970s 00:22:30.020 sys 0m2.048s 00:22:30.020 11:09:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:30.020 11:09:58 -- common/autotest_common.sh@10 -- # set +x 00:22:30.020 ************************************ 00:22:30.020 END TEST nvmf_bdev_io_wait 00:22:30.020 ************************************ 00:22:30.278 11:09:58 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:30.278 11:09:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:30.278 11:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:30.278 11:09:58 -- common/autotest_common.sh@10 -- # set +x 00:22:30.278 ************************************ 00:22:30.278 START TEST nvmf_queue_depth 00:22:30.278 ************************************ 00:22:30.278 11:09:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:30.278 * Looking for test storage... 
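Just before the END TEST banner above, nvmftestfini tears the initiator and target back down. Condensed, the steps visible in the trace are roughly the following (killprocess also verifies the pid with kill -0/ps before signalling; $nvmfpid held the target pid, 89948 in this run):
modprobe -v -r nvme-tcp              # unloads nvme_tcp and, as the rmmod lines show, nvme_fabrics/nvme_keyring with it
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt application
ip -4 addr flush nvmf_init_if        # drop the test address from the initiator-side veth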
00:22:30.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:30.278 11:09:58 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:30.278 11:09:58 -- nvmf/common.sh@7 -- # uname -s 00:22:30.278 11:09:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.278 11:09:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.278 11:09:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.278 11:09:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.278 11:09:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.278 11:09:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.278 11:09:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.278 11:09:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.278 11:09:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.278 11:09:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.278 11:09:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:22:30.278 11:09:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:22:30.278 11:09:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.278 11:09:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.278 11:09:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:30.278 11:09:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.278 11:09:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.278 11:09:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.278 11:09:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.278 11:09:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.278 11:09:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.278 11:09:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.278 11:09:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.278 11:09:58 -- paths/export.sh@5 -- # export PATH 00:22:30.278 11:09:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.278 11:09:58 -- nvmf/common.sh@47 -- # : 0 00:22:30.278 11:09:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.278 11:09:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.278 11:09:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.278 11:09:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.278 11:09:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.278 11:09:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:30.278 11:09:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.278 11:09:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.278 11:09:58 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:22:30.278 11:09:58 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:22:30.278 11:09:58 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.278 11:09:58 -- target/queue_depth.sh@19 -- # nvmftestinit 00:22:30.278 11:09:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:30.278 11:09:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.278 11:09:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:30.278 11:09:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:30.278 11:09:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:30.278 11:09:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.278 11:09:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.278 11:09:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.278 11:09:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:30.278 11:09:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:30.278 11:09:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:30.278 11:09:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:30.278 11:09:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:30.278 11:09:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:30.278 11:09:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.278 11:09:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.278 11:09:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:30.278 11:09:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:30.278 11:09:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:30.278 11:09:58 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:30.278 11:09:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:30.278 11:09:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.278 11:09:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:30.278 11:09:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:30.278 11:09:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:30.278 11:09:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:30.279 11:09:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:30.279 11:09:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:30.279 Cannot find device "nvmf_tgt_br" 00:22:30.279 11:09:58 -- nvmf/common.sh@155 -- # true 00:22:30.279 11:09:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:30.279 Cannot find device "nvmf_tgt_br2" 00:22:30.279 11:09:58 -- nvmf/common.sh@156 -- # true 00:22:30.279 11:09:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:30.279 11:09:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:30.279 Cannot find device "nvmf_tgt_br" 00:22:30.279 11:09:58 -- nvmf/common.sh@158 -- # true 00:22:30.279 11:09:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:30.279 Cannot find device "nvmf_tgt_br2" 00:22:30.279 11:09:58 -- nvmf/common.sh@159 -- # true 00:22:30.279 11:09:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:30.537 11:09:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:30.537 11:09:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:30.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:30.537 11:09:58 -- nvmf/common.sh@162 -- # true 00:22:30.537 11:09:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:30.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:30.537 11:09:58 -- nvmf/common.sh@163 -- # true 00:22:30.537 11:09:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:30.537 11:09:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:30.537 11:09:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:30.537 11:09:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:30.537 11:09:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:30.537 11:09:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:30.537 11:09:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:30.537 11:09:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:30.537 11:09:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:30.537 11:09:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:30.537 11:09:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:30.537 11:09:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:30.537 11:09:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:30.537 11:09:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:30.537 11:09:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:22:30.537 11:09:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:30.537 11:09:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:30.537 11:09:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:30.537 11:09:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:30.537 11:09:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:30.537 11:09:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:30.537 11:09:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:30.537 11:09:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:30.537 11:09:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:30.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:22:30.537 00:22:30.537 --- 10.0.0.2 ping statistics --- 00:22:30.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.537 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:30.537 11:09:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:30.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:30.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:30.537 00:22:30.537 --- 10.0.0.3 ping statistics --- 00:22:30.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.537 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:30.537 11:09:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:30.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:30.537 00:22:30.537 --- 10.0.0.1 ping statistics --- 00:22:30.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.537 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:30.537 11:09:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.537 11:09:59 -- nvmf/common.sh@422 -- # return 0 00:22:30.537 11:09:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:30.537 11:09:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.537 11:09:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:30.537 11:09:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:30.537 11:09:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.537 11:09:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:30.537 11:09:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:30.537 11:09:59 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:30.537 11:09:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:30.537 11:09:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:30.537 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
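nvmf_veth_init, traced above, builds the whole test network in software: a network namespace holding the target with two addresses (10.0.0.2 and 10.0.0.3), a host-side interface at 10.0.0.1, and a bridge joining the veth peers, finished off with iptables rules and one ping per address. Condensed to the essential commands, with the error-tolerant probing and cleanup at the start omitted:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # host side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # host -> both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # target namespace -> host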
00:22:30.537 11:09:59 -- nvmf/common.sh@470 -- # nvmfpid=90247 00:22:30.537 11:09:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:30.537 11:09:59 -- nvmf/common.sh@471 -- # waitforlisten 90247 00:22:30.537 11:09:59 -- common/autotest_common.sh@817 -- # '[' -z 90247 ']' 00:22:30.537 11:09:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.537 11:09:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:30.537 11:09:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.537 11:09:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:30.537 11:09:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.795 [2024-04-18 11:09:59.213846] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:30.795 [2024-04-18 11:09:59.213936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.795 [2024-04-18 11:09:59.356522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.052 [2024-04-18 11:09:59.448322] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.052 [2024-04-18 11:09:59.448379] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.052 [2024-04-18 11:09:59.448391] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.052 [2024-04-18 11:09:59.448399] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.052 [2024-04-18 11:09:59.448408] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
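nvmfappstart then launches the target inside that namespace and waitforlisten blocks until the RPC socket answers. A rough equivalent, assuming the SPDK repo root as the working directory (the real helper also enforces a retry limit):
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                          # 90247 in this run
# Poll the default RPC socket until the app responds; rpc_get_methods is a
# cheap RPC every SPDK application serves.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; break; }
    sleep 0.5
done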
00:22:31.052 [2024-04-18 11:09:59.448436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.617 11:10:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:31.617 11:10:00 -- common/autotest_common.sh@850 -- # return 0 00:22:31.617 11:10:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:31.617 11:10:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:31.617 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.617 11:10:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.617 11:10:00 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.617 11:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.617 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.617 [2024-04-18 11:10:00.251183] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.617 11:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.617 11:10:00 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:31.617 11:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.617 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.875 Malloc0 00:22:31.875 11:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.875 11:10:00 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.875 11:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.875 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.875 11:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.875 11:10:00 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.875 11:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.875 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.875 11:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.875 11:10:00 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.875 11:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.875 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.875 [2024-04-18 11:10:00.306384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:31.875 11:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.875 11:10:00 -- target/queue_depth.sh@30 -- # bdevperf_pid=90297 00:22:31.875 11:10:00 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.875 11:10:00 -- target/queue_depth.sh@33 -- # waitforlisten 90297 /var/tmp/bdevperf.sock 00:22:31.875 11:10:00 -- common/autotest_common.sh@817 -- # '[' -z 90297 ']' 00:22:31.875 11:10:00 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:31.875 11:10:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.875 11:10:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:31.875 11:10:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.875 11:10:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:31.875 11:10:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.875 [2024-04-18 11:10:00.364248] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:31.875 [2024-04-18 11:10:00.364347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90297 ] 00:22:31.875 [2024-04-18 11:10:00.503765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.133 [2024-04-18 11:10:00.604272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.698 11:10:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:32.698 11:10:01 -- common/autotest_common.sh@850 -- # return 0 00:22:32.699 11:10:01 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:32.699 11:10:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.699 11:10:01 -- common/autotest_common.sh@10 -- # set +x 00:22:32.957 NVMe0n1 00:22:32.957 11:10:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.957 11:10:01 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.957 Running I/O for 10 seconds... 
00:22:45.159 00:22:45.159 Latency(us) 00:22:45.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.159 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:45.159 Verification LBA range: start 0x0 length 0x4000 00:22:45.159 NVMe0n1 : 10.10 8710.41 34.03 0.00 0.00 117041.31 28001.75 80073.08 00:22:45.159 =================================================================================================================== 00:22:45.159 Total : 8710.41 34.03 0.00 0.00 117041.31 28001.75 80073.08 00:22:45.159 0 00:22:45.159 11:10:11 -- target/queue_depth.sh@39 -- # killprocess 90297 00:22:45.159 11:10:11 -- common/autotest_common.sh@936 -- # '[' -z 90297 ']' 00:22:45.159 11:10:11 -- common/autotest_common.sh@940 -- # kill -0 90297 00:22:45.159 11:10:11 -- common/autotest_common.sh@941 -- # uname 00:22:45.159 11:10:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:45.159 11:10:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90297 00:22:45.159 killing process with pid 90297 00:22:45.159 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.159 00:22:45.159 Latency(us) 00:22:45.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.159 =================================================================================================================== 00:22:45.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.159 11:10:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:45.159 11:10:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:45.159 11:10:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90297' 00:22:45.159 11:10:11 -- common/autotest_common.sh@955 -- # kill 90297 00:22:45.159 11:10:11 -- common/autotest_common.sh@960 -- # wait 90297 00:22:45.159 11:10:11 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:45.159 11:10:11 -- target/queue_depth.sh@43 -- # nvmftestfini 00:22:45.159 11:10:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:45.159 11:10:11 -- nvmf/common.sh@117 -- # sync 00:22:45.159 11:10:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.159 11:10:11 -- nvmf/common.sh@120 -- # set +e 00:22:45.159 11:10:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.159 11:10:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.159 rmmod nvme_tcp 00:22:45.159 rmmod nvme_fabrics 00:22:45.159 rmmod nvme_keyring 00:22:45.159 11:10:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.159 11:10:11 -- nvmf/common.sh@124 -- # set -e 00:22:45.159 11:10:11 -- nvmf/common.sh@125 -- # return 0 00:22:45.159 11:10:11 -- nvmf/common.sh@478 -- # '[' -n 90247 ']' 00:22:45.159 11:10:11 -- nvmf/common.sh@479 -- # killprocess 90247 00:22:45.159 11:10:11 -- common/autotest_common.sh@936 -- # '[' -z 90247 ']' 00:22:45.160 11:10:11 -- common/autotest_common.sh@940 -- # kill -0 90247 00:22:45.160 11:10:11 -- common/autotest_common.sh@941 -- # uname 00:22:45.160 11:10:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:45.160 11:10:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90247 00:22:45.160 killing process with pid 90247 00:22:45.160 11:10:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:45.160 11:10:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:45.160 11:10:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90247' 00:22:45.160 11:10:12 -- 
common/autotest_common.sh@955 -- # kill 90247 00:22:45.160 11:10:12 -- common/autotest_common.sh@960 -- # wait 90247 00:22:45.160 11:10:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:45.160 11:10:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:45.160 11:10:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.160 11:10:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.160 11:10:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.160 11:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.160 11:10:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:45.160 00:22:45.160 real 0m13.543s 00:22:45.160 user 0m23.587s 00:22:45.160 sys 0m1.937s 00:22:45.160 11:10:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:45.160 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:22:45.160 ************************************ 00:22:45.160 END TEST nvmf_queue_depth 00:22:45.160 ************************************ 00:22:45.160 11:10:12 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:45.160 11:10:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:45.160 11:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:45.160 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:22:45.160 ************************************ 00:22:45.160 START TEST nvmf_multipath 00:22:45.160 ************************************ 00:22:45.160 11:10:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:45.160 * Looking for test storage... 
00:22:45.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:45.160 11:10:12 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:45.160 11:10:12 -- nvmf/common.sh@7 -- # uname -s 00:22:45.160 11:10:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.160 11:10:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.160 11:10:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.160 11:10:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.160 11:10:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.160 11:10:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.160 11:10:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.160 11:10:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.160 11:10:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.160 11:10:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:22:45.160 11:10:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:22:45.160 11:10:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.160 11:10:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.160 11:10:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:45.160 11:10:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.160 11:10:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:45.160 11:10:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.160 11:10:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.160 11:10:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.160 11:10:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.160 11:10:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.160 11:10:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.160 11:10:12 -- paths/export.sh@5 -- # export PATH 00:22:45.160 11:10:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.160 11:10:12 -- nvmf/common.sh@47 -- # : 0 00:22:45.160 11:10:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.160 11:10:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.160 11:10:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.160 11:10:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.160 11:10:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.160 11:10:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.160 11:10:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.160 11:10:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.160 11:10:12 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.160 11:10:12 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.160 11:10:12 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:45.160 11:10:12 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.160 11:10:12 -- target/multipath.sh@43 -- # nvmftestinit 00:22:45.160 11:10:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:45.160 11:10:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.160 11:10:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:45.160 11:10:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:45.160 11:10:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:45.160 11:10:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.160 11:10:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.160 11:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.160 11:10:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:45.160 11:10:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:45.160 11:10:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.160 11:10:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.160 11:10:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:45.160 11:10:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:45.160 11:10:12 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:45.160 11:10:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:45.160 11:10:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:45.160 11:10:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.160 11:10:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:45.160 11:10:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:45.160 11:10:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:45.160 11:10:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:45.160 11:10:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:45.160 11:10:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:45.160 Cannot find device "nvmf_tgt_br" 00:22:45.160 11:10:12 -- nvmf/common.sh@155 -- # true 00:22:45.160 11:10:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.160 Cannot find device "nvmf_tgt_br2" 00:22:45.160 11:10:12 -- nvmf/common.sh@156 -- # true 00:22:45.160 11:10:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:45.160 11:10:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:45.160 Cannot find device "nvmf_tgt_br" 00:22:45.160 11:10:12 -- nvmf/common.sh@158 -- # true 00:22:45.160 11:10:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:45.160 Cannot find device "nvmf_tgt_br2" 00:22:45.160 11:10:12 -- nvmf/common.sh@159 -- # true 00:22:45.160 11:10:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:45.160 11:10:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:45.160 11:10:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.160 11:10:12 -- nvmf/common.sh@162 -- # true 00:22:45.160 11:10:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.160 11:10:12 -- nvmf/common.sh@163 -- # true 00:22:45.161 11:10:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:45.161 11:10:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:45.161 11:10:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:45.161 11:10:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:45.161 11:10:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:45.161 11:10:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:45.161 11:10:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:45.161 11:10:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:45.161 11:10:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:45.161 11:10:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:45.161 11:10:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:45.161 11:10:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:45.161 11:10:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:45.161 11:10:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:22:45.161 11:10:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:45.161 11:10:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:45.161 11:10:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:45.161 11:10:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:45.161 11:10:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:45.161 11:10:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:45.161 11:10:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:45.161 11:10:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:45.161 11:10:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:45.161 11:10:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:45.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:22:45.161 00:22:45.161 --- 10.0.0.2 ping statistics --- 00:22:45.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.161 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:45.161 11:10:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:45.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:45.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:22:45.161 00:22:45.161 --- 10.0.0.3 ping statistics --- 00:22:45.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.161 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:45.161 11:10:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:45.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:45.161 00:22:45.161 --- 10.0.0.1 ping statistics --- 00:22:45.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.161 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:45.161 11:10:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.161 11:10:12 -- nvmf/common.sh@422 -- # return 0 00:22:45.161 11:10:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:45.161 11:10:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.161 11:10:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:45.161 11:10:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:45.161 11:10:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.161 11:10:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:45.161 11:10:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:45.161 11:10:12 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:22:45.161 11:10:12 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:22:45.161 11:10:12 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:22:45.161 11:10:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:45.161 11:10:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:45.161 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:22:45.161 11:10:12 -- nvmf/common.sh@470 -- # nvmfpid=90627 00:22:45.161 11:10:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.161 11:10:12 -- nvmf/common.sh@471 -- # waitforlisten 90627 00:22:45.161 11:10:12 -- common/autotest_common.sh@817 -- # '[' -z 90627 ']' 00:22:45.161 11:10:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.161 11:10:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:45.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.161 11:10:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.161 11:10:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:45.161 11:10:12 -- common/autotest_common.sh@10 -- # set +x 00:22:45.161 [2024-04-18 11:10:12.940860] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:45.161 [2024-04-18 11:10:12.941240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.161 [2024-04-18 11:10:13.085856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.161 [2024-04-18 11:10:13.172868] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.161 [2024-04-18 11:10:13.172924] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.161 [2024-04-18 11:10:13.172936] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.161 [2024-04-18 11:10:13.172945] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.161 [2024-04-18 11:10:13.172953] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
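The -m argument to nvmf_tgt (and to the bdevperf instances earlier) is a hexadecimal core mask: 0x2 above meant core 1 only, and 0xF here is 0b1111, i.e. cores 0-3, which is why four "Reactor started on core N" lines follow. A quick way to expand such a mask:
mask=0xF
for i in $(seq 0 31); do
    (( (mask >> i) & 1 )) && echo "reactor on core $i"
done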
00:22:45.161 [2024-04-18 11:10:13.173121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.161 [2024-04-18 11:10:13.173201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.161 [2024-04-18 11:10:13.173946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.161 [2024-04-18 11:10:13.173916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.420 11:10:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:45.420 11:10:13 -- common/autotest_common.sh@850 -- # return 0 00:22:45.420 11:10:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:45.420 11:10:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:45.420 11:10:13 -- common/autotest_common.sh@10 -- # set +x 00:22:45.420 11:10:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.420 11:10:13 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:45.678 [2024-04-18 11:10:14.177672] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.678 11:10:14 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:45.936 Malloc0 00:22:45.936 11:10:14 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:22:46.195 11:10:14 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:46.453 11:10:15 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.711 [2024-04-18 11:10:15.316999] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.711 11:10:15 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:46.969 [2024-04-18 11:10:15.553210] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:46.969 11:10:15 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:22:47.226 11:10:15 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:22:47.484 11:10:15 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:22:47.485 11:10:15 -- common/autotest_common.sh@1184 -- # local i=0 00:22:47.485 11:10:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:47.485 11:10:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:47.485 11:10:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:49.392 11:10:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:49.392 11:10:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:49.392 11:10:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:49.392 11:10:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:49.392 11:10:18 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:49.392 11:10:18 -- common/autotest_common.sh@1194 -- # return 0 00:22:49.392 11:10:18 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:22:49.392 11:10:18 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:22:49.392 11:10:18 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:22:49.392 11:10:18 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:22:49.392 11:10:18 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:22:49.392 11:10:18 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:22:49.392 11:10:18 -- target/multipath.sh@38 -- # return 0 00:22:49.392 11:10:18 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:22:49.392 11:10:18 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:22:49.392 11:10:18 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:22:49.392 11:10:18 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:22:49.392 11:10:18 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:22:49.392 11:10:18 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:22:49.392 11:10:18 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:22:49.392 11:10:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:22:49.392 11:10:18 -- target/multipath.sh@22 -- # local timeout=20 00:22:49.392 11:10:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:49.392 11:10:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:49.392 11:10:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:49.392 11:10:18 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:22:49.392 11:10:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:22:49.392 11:10:18 -- target/multipath.sh@22 -- # local timeout=20 00:22:49.392 11:10:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:49.392 11:10:18 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:22:49.392 11:10:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:49.392 11:10:18 -- target/multipath.sh@85 -- # echo numa 00:22:49.392 11:10:18 -- target/multipath.sh@88 -- # fio_pid=90769 00:22:49.392 11:10:18 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:22:49.392 11:10:18 -- target/multipath.sh@90 -- # sleep 1 00:22:49.392 [global] 00:22:49.392 thread=1 00:22:49.392 invalidate=1 00:22:49.392 rw=randrw 00:22:49.392 time_based=1 00:22:49.392 runtime=6 00:22:49.392 ioengine=libaio 00:22:49.392 direct=1 00:22:49.392 bs=4096 00:22:49.392 iodepth=128 00:22:49.392 norandommap=0 00:22:49.392 numjobs=1 00:22:49.392 00:22:49.650 verify_dump=1 00:22:49.651 verify_backlog=512 00:22:49.651 verify_state_save=0 00:22:49.651 do_verify=1 00:22:49.651 verify=crc32c-intel 00:22:49.651 [job0] 00:22:49.651 filename=/dev/nvme0n1 00:22:49.651 Could not set queue depth (nvme0n1) 00:22:49.651 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:49.651 fio-3.35 00:22:49.651 Starting 1 thread 00:22:50.585 11:10:19 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:50.843 11:10:19 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:51.102 11:10:19 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:22:51.102 11:10:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:22:51.102 11:10:19 -- target/multipath.sh@22 -- # local timeout=20 00:22:51.102 11:10:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:51.102 11:10:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:51.102 11:10:19 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:51.102 11:10:19 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:22:51.102 11:10:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:22:51.102 11:10:19 -- target/multipath.sh@22 -- # local timeout=20 00:22:51.102 11:10:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:51.102 11:10:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:51.102 11:10:19 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:51.102 11:10:19 -- target/multipath.sh@25 -- # sleep 1s 00:22:52.036 11:10:20 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:52.036 11:10:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:22:52.036 11:10:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:52.036 11:10:20 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:52.294 11:10:20 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:52.564 11:10:21 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:22:52.564 11:10:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:22:52.564 11:10:21 -- target/multipath.sh@22 -- # local timeout=20 00:22:52.564 11:10:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:52.564 11:10:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:52.564 11:10:21 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:52.564 11:10:21 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:22:52.564 11:10:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:22:52.564 11:10:21 -- target/multipath.sh@22 -- # local timeout=20 00:22:52.564 11:10:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:52.564 11:10:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:52.564 11:10:21 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:52.564 11:10:21 -- target/multipath.sh@25 -- # sleep 1s 00:22:53.509 11:10:22 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:53.509 11:10:22 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:22:53.509 11:10:22 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:53.509 11:10:22 -- target/multipath.sh@104 -- # wait 90769 00:22:56.039 00:22:56.039 job0: (groupid=0, jobs=1): err= 0: pid=90793: Thu Apr 18 11:10:24 2024 00:22:56.039 read: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(251MiB/6006msec) 00:22:56.039 slat (usec): min=3, max=6956, avg=53.60, stdev=241.66 00:22:56.039 clat (usec): min=1385, max=14408, avg=8094.97, stdev=1231.24 00:22:56.039 lat (usec): min=1451, max=14417, avg=8148.57, stdev=1241.25 00:22:56.039 clat percentiles (usec): 00:22:56.039 | 1.00th=[ 4752], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7373], 00:22:56.039 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8225], 00:22:56.039 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[10290], 00:22:56.039 | 99.00th=[11863], 99.50th=[12256], 99.90th=[13042], 99.95th=[13304], 00:22:56.039 | 99.99th=[14091] 00:22:56.039 bw ( KiB/s): min=10496, max=27960, per=53.50%, avg=22923.64, stdev=5955.33, samples=11 00:22:56.039 iops : min= 2624, max= 6990, avg=5730.91, stdev=1488.83, samples=11 00:22:56.039 write: IOPS=6378, BW=24.9MiB/s (26.1MB/s)(136MiB/5456msec); 0 zone resets 00:22:56.039 slat (usec): min=4, max=2580, avg=63.65, stdev=165.60 00:22:56.039 clat (usec): min=777, max=13200, avg=6930.03, stdev=1028.68 00:22:56.039 lat (usec): min=817, max=13229, avg=6993.67, stdev=1032.41 00:22:56.039 clat percentiles (usec): 00:22:56.039 | 1.00th=[ 3785], 5.00th=[ 5080], 10.00th=[ 5932], 20.00th=[ 6325], 00:22:56.039 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:22:56.039 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:22:56.039 | 99.00th=[10290], 99.50th=[10814], 99.90th=[12256], 99.95th=[12518], 00:22:56.039 | 99.99th=[13042] 00:22:56.039 bw ( KiB/s): min=10632, max=28920, per=89.82%, avg=22916.36, stdev=5859.69, samples=11 00:22:56.039 iops : min= 2658, max= 7230, avg=5729.09, stdev=1464.98, samples=11 00:22:56.039 lat (usec) : 1000=0.01% 00:22:56.039 lat (msec) : 2=0.01%, 4=0.67%, 10=95.05%, 20=4.26% 00:22:56.039 cpu : usr=5.63%, sys=22.03%, ctx=6351, majf=0, minf=90 00:22:56.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:56.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:56.039 issued rwts: total=64332,34802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.039 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:56.039 00:22:56.039 Run status group 0 (all jobs): 00:22:56.039 READ: bw=41.8MiB/s (43.9MB/s), 41.8MiB/s-41.8MiB/s (43.9MB/s-43.9MB/s), io=251MiB (264MB), run=6006-6006msec 00:22:56.039 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=136MiB (143MB), run=5456-5456msec 00:22:56.039 00:22:56.039 Disk stats (read/write): 00:22:56.039 nvme0n1: ios=63436/34070, merge=0/0, ticks=482813/221153, in_queue=703966, util=98.65% 00:22:56.039 11:10:24 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:56.039 11:10:24 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:56.297 11:10:24 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:22:56.297 11:10:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:22:56.297 11:10:24 -- target/multipath.sh@22 -- # local timeout=20 00:22:56.297 11:10:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:56.297 11:10:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:22:56.297 11:10:24 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:56.297 11:10:24 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:22:56.297 11:10:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:22:56.297 11:10:24 -- target/multipath.sh@22 -- # local timeout=20 00:22:56.297 11:10:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:56.297 11:10:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:56.297 11:10:24 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:22:56.297 11:10:24 -- target/multipath.sh@25 -- # sleep 1s 00:22:57.260 11:10:25 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:57.260 11:10:25 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:57.260 11:10:25 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:22:57.261 11:10:25 -- target/multipath.sh@113 -- # echo round-robin 00:22:57.261 11:10:25 -- target/multipath.sh@116 -- # fio_pid=90923 00:22:57.261 11:10:25 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:22:57.261 11:10:25 -- target/multipath.sh@118 -- # sleep 1 00:22:57.261 [global] 00:22:57.261 thread=1 00:22:57.261 invalidate=1 00:22:57.261 rw=randrw 00:22:57.261 time_based=1 00:22:57.261 runtime=6 00:22:57.261 ioengine=libaio 00:22:57.261 direct=1 00:22:57.261 bs=4096 00:22:57.261 iodepth=128 00:22:57.261 norandommap=0 00:22:57.261 numjobs=1 00:22:57.261 00:22:57.261 verify_dump=1 00:22:57.261 verify_backlog=512 00:22:57.261 verify_state_save=0 00:22:57.261 do_verify=1 00:22:57.261 verify=crc32c-intel 00:22:57.261 [job0] 00:22:57.261 filename=/dev/nvme0n1 00:22:57.261 Could not set queue depth (nvme0n1) 00:22:57.519 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:57.519 fio-3.35 00:22:57.519 Starting 1 thread 00:22:58.452 11:10:26 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:58.710 11:10:27 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:58.969 11:10:27 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:22:58.969 11:10:27 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:22:58.969 11:10:27 -- target/multipath.sh@22 -- # local timeout=20 00:22:58.969 11:10:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:22:58.969 11:10:27 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:22:58.969 11:10:27 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:22:58.969 11:10:27 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:22:58.969 11:10:27 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:22:58.969 11:10:27 -- target/multipath.sh@22 -- # local timeout=20 00:22:58.969 11:10:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:22:58.969 11:10:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:58.969 11:10:27 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:58.969 11:10:27 -- target/multipath.sh@25 -- # sleep 1s 00:22:59.904 11:10:28 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:22:59.904 11:10:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:22:59.904 11:10:28 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:22:59.904 11:10:28 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:00.162 11:10:28 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:00.478 11:10:28 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:23:00.478 11:10:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:23:00.478 11:10:28 -- target/multipath.sh@22 -- # local timeout=20 00:23:00.478 11:10:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:00.478 11:10:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:00.478 11:10:28 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:00.478 11:10:28 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:23:00.478 11:10:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:23:00.478 11:10:28 -- target/multipath.sh@22 -- # local timeout=20 00:23:00.478 11:10:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:00.478 11:10:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:00.478 11:10:28 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:00.478 11:10:28 -- target/multipath.sh@25 -- # sleep 1s 00:23:01.415 11:10:29 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:01.415 11:10:29 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:01.415 11:10:29 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:01.415 11:10:29 -- target/multipath.sh@132 -- # wait 90923 00:23:03.945 00:23:03.945 job0: (groupid=0, jobs=1): err= 0: pid=90944: Thu Apr 18 11:10:32 2024 00:23:03.945 read: IOPS=10.9k, BW=42.7MiB/s (44.8MB/s)(256MiB/6000msec) 00:23:03.945 slat (usec): min=4, max=6481, avg=45.95, stdev=227.12 00:23:03.945 clat (usec): min=451, max=20866, avg=8043.27, stdev=2411.15 00:23:03.945 lat (usec): min=467, max=20873, avg=8089.22, stdev=2423.92 00:23:03.945 clat percentiles (usec): 00:23:03.945 | 1.00th=[ 1811], 5.00th=[ 3818], 10.00th=[ 5145], 20.00th=[ 6718], 00:23:03.945 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:23:03.945 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[10552], 95.00th=[12125], 00:23:03.945 | 99.00th=[16188], 99.50th=[17171], 99.90th=[19006], 99.95th=[19530], 00:23:03.945 | 99.99th=[20055] 00:23:03.945 bw ( KiB/s): min=10160, max=36047, per=52.30%, avg=22892.27, stdev=7635.56, samples=11 00:23:03.945 iops : min= 2540, max= 9011, avg=5723.00, stdev=1908.76, samples=11 00:23:03.945 write: IOPS=6589, BW=25.7MiB/s (27.0MB/s)(135MiB/5244msec); 0 zone resets 00:23:03.945 slat (usec): min=4, max=5156, avg=55.54, stdev=147.22 00:23:03.945 clat (usec): min=222, max=19453, avg=6742.95, stdev=2435.60 00:23:03.945 lat (usec): min=251, max=19475, avg=6798.49, stdev=2442.72 00:23:03.945 clat percentiles (usec): 00:23:03.945 | 1.00th=[ 1057], 5.00th=[ 2573], 10.00th=[ 3490], 20.00th=[ 4948], 00:23:03.945 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7242], 00:23:03.945 | 70.00th=[ 7570], 80.00th=[ 8029], 90.00th=[ 8979], 95.00th=[10683], 00:23:03.945 | 99.00th=[15008], 99.50th=[15795], 99.90th=[17433], 99.95th=[17695], 00:23:03.945 | 99.99th=[18744] 00:23:03.945 bw ( KiB/s): min=10544, max=36950, per=86.97%, avg=22924.18, stdev=7480.26, samples=11 00:23:03.945 iops : min= 2636, max= 9237, avg=5731.00, stdev=1869.97, samples=11 00:23:03.945 lat (usec) : 250=0.01%, 500=0.05%, 750=0.14%, 1000=0.26% 00:23:03.945 lat (msec) : 2=1.57%, 4=6.26%, 10=80.90%, 20=10.81%, 50=0.01% 00:23:03.945 cpu : usr=5.66%, sys=23.41%, ctx=6901, majf=0, minf=133 00:23:03.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:03.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:03.946 issued rwts: total=65654,34556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:03.946 00:23:03.946 Run status group 0 (all jobs): 00:23:03.946 READ: bw=42.7MiB/s (44.8MB/s), 42.7MiB/s-42.7MiB/s (44.8MB/s-44.8MB/s), io=256MiB (269MB), run=6000-6000msec 00:23:03.946 WRITE: bw=25.7MiB/s (27.0MB/s), 25.7MiB/s-25.7MiB/s (27.0MB/s-27.0MB/s), io=135MiB (142MB), run=5244-5244msec 00:23:03.946 00:23:03.946 Disk stats (read/write): 00:23:03.946 nvme0n1: ios=64805/33701, merge=0/0, ticks=489597/211694, in_queue=701291, util=98.62% 00:23:03.946 11:10:32 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:03.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:03.946 11:10:32 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:03.946 11:10:32 -- common/autotest_common.sh@1205 -- # local i=0 00:23:03.946 11:10:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 
00:23:03.946 11:10:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:03.946 11:10:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:03.946 11:10:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:03.946 11:10:32 -- common/autotest_common.sh@1217 -- # return 0 00:23:03.946 11:10:32 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.946 11:10:32 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:23:04.203 11:10:32 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:23:04.203 11:10:32 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:23:04.203 11:10:32 -- target/multipath.sh@144 -- # nvmftestfini 00:23:04.203 11:10:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:04.203 11:10:32 -- nvmf/common.sh@117 -- # sync 00:23:04.203 11:10:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.203 11:10:32 -- nvmf/common.sh@120 -- # set +e 00:23:04.203 11:10:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.203 11:10:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.203 rmmod nvme_tcp 00:23:04.203 rmmod nvme_fabrics 00:23:04.203 rmmod nvme_keyring 00:23:04.203 11:10:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.203 11:10:32 -- nvmf/common.sh@124 -- # set -e 00:23:04.203 11:10:32 -- nvmf/common.sh@125 -- # return 0 00:23:04.203 11:10:32 -- nvmf/common.sh@478 -- # '[' -n 90627 ']' 00:23:04.203 11:10:32 -- nvmf/common.sh@479 -- # killprocess 90627 00:23:04.203 11:10:32 -- common/autotest_common.sh@936 -- # '[' -z 90627 ']' 00:23:04.203 11:10:32 -- common/autotest_common.sh@940 -- # kill -0 90627 00:23:04.203 11:10:32 -- common/autotest_common.sh@941 -- # uname 00:23:04.203 11:10:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:04.203 11:10:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90627 00:23:04.203 11:10:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:04.203 killing process with pid 90627 00:23:04.203 11:10:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:04.203 11:10:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90627' 00:23:04.203 11:10:32 -- common/autotest_common.sh@955 -- # kill 90627 00:23:04.203 11:10:32 -- common/autotest_common.sh@960 -- # wait 90627 00:23:04.460 11:10:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:04.460 11:10:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:04.460 11:10:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:04.460 11:10:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.460 11:10:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.460 11:10:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.460 11:10:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.460 11:10:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.460 11:10:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:04.460 ************************************ 00:23:04.460 END TEST nvmf_multipath 00:23:04.460 ************************************ 00:23:04.460 00:23:04.460 real 0m20.694s 00:23:04.460 user 1m21.158s 00:23:04.460 sys 0m6.359s 00:23:04.460 11:10:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:04.460 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:04.754 11:10:33 -- 
nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:23:04.754 11:10:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:04.754 11:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:04.754 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:04.754 ************************************ 00:23:04.754 START TEST nvmf_zcopy 00:23:04.754 ************************************ 00:23:04.754 11:10:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:23:04.754 * Looking for test storage... 00:23:04.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:04.754 11:10:33 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.754 11:10:33 -- nvmf/common.sh@7 -- # uname -s 00:23:04.754 11:10:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.755 11:10:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.755 11:10:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.755 11:10:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.755 11:10:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.755 11:10:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.755 11:10:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.755 11:10:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.755 11:10:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.755 11:10:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.755 11:10:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:04.755 11:10:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:04.755 11:10:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.755 11:10:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.755 11:10:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.755 11:10:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.755 11:10:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.755 11:10:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.755 11:10:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.755 11:10:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.755 11:10:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.755 11:10:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.755 11:10:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.755 11:10:33 -- paths/export.sh@5 -- # export PATH 00:23:04.755 11:10:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.755 11:10:33 -- nvmf/common.sh@47 -- # : 0 00:23:04.755 11:10:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.755 11:10:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.755 11:10:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.755 11:10:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.755 11:10:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.755 11:10:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.755 11:10:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.755 11:10:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.755 11:10:33 -- target/zcopy.sh@12 -- # nvmftestinit 00:23:04.755 11:10:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:04.755 11:10:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.755 11:10:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:04.755 11:10:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:04.755 11:10:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:04.755 11:10:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.755 11:10:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.755 11:10:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.755 11:10:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:04.755 11:10:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:04.755 11:10:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:04.755 11:10:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:04.755 11:10:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:04.755 11:10:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:04.755 11:10:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.755 11:10:33 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.755 11:10:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:04.755 11:10:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:04.755 11:10:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.755 11:10:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.755 11:10:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.755 11:10:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.755 11:10:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.755 11:10:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.755 11:10:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.755 11:10:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.755 11:10:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:04.755 11:10:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:04.755 Cannot find device "nvmf_tgt_br" 00:23:04.755 11:10:33 -- nvmf/common.sh@155 -- # true 00:23:04.755 11:10:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.755 Cannot find device "nvmf_tgt_br2" 00:23:04.755 11:10:33 -- nvmf/common.sh@156 -- # true 00:23:04.755 11:10:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:04.755 11:10:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:04.755 Cannot find device "nvmf_tgt_br" 00:23:04.755 11:10:33 -- nvmf/common.sh@158 -- # true 00:23:04.755 11:10:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:04.755 Cannot find device "nvmf_tgt_br2" 00:23:04.755 11:10:33 -- nvmf/common.sh@159 -- # true 00:23:04.755 11:10:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.014 11:10:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.014 11:10:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.014 11:10:33 -- nvmf/common.sh@162 -- # true 00:23:05.014 11:10:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.014 11:10:33 -- nvmf/common.sh@163 -- # true 00:23:05.014 11:10:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.014 11:10:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.014 11:10:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.014 11:10:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.014 11:10:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.014 11:10:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.014 11:10:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.014 11:10:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.014 11:10:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.014 11:10:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.014 11:10:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.014 11:10:33 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.014 11:10:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.014 11:10:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.014 11:10:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.014 11:10:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.014 11:10:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.014 11:10:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.014 11:10:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:05.014 11:10:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.272 11:10:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:05.272 11:10:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.272 11:10:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.272 11:10:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:05.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:23:05.272 00:23:05.272 --- 10.0.0.2 ping statistics --- 00:23:05.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.272 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:05.272 11:10:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:05.272 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.272 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:23:05.272 00:23:05.272 --- 10.0.0.3 ping statistics --- 00:23:05.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.272 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:05.272 11:10:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:05.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:05.272 00:23:05.272 --- 10.0.0.1 ping statistics --- 00:23:05.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.272 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:05.272 11:10:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.272 11:10:33 -- nvmf/common.sh@422 -- # return 0 00:23:05.272 11:10:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:05.272 11:10:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.272 11:10:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:05.272 11:10:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:05.272 11:10:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.272 11:10:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:05.272 11:10:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:05.272 11:10:33 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:23:05.272 11:10:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:05.272 11:10:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:05.272 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.272 11:10:33 -- nvmf/common.sh@470 -- # nvmfpid=91232 00:23:05.272 11:10:33 -- nvmf/common.sh@471 -- # waitforlisten 91232 00:23:05.272 11:10:33 -- common/autotest_common.sh@817 -- # '[' -z 91232 ']' 00:23:05.272 11:10:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.272 11:10:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:05.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.272 11:10:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.272 11:10:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.272 11:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.272 11:10:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.272 [2024-04-18 11:10:33.778096] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:05.272 [2024-04-18 11:10:33.778219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.530 [2024-04-18 11:10:33.919070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.530 [2024-04-18 11:10:34.013409] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.530 [2024-04-18 11:10:34.013472] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.530 [2024-04-18 11:10:34.013487] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.530 [2024-04-18 11:10:34.013498] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.530 [2024-04-18 11:10:34.013508] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
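(Annotation, not part of the captured trace.) The zcopy run that follows starts a fresh single-core target (-m 0x2) behind the same veth/bridge topology and then builds the subsystem over RPC with zero copy enabled on the TCP transport. A minimal sketch of the equivalent manual calls, assuming the target's default RPC socket and the repo-root paths used elsewhere in this trace; rpc_cmd in the trace is the test helper wrapper around scripts/rpc.py:

# Zero-copy TCP transport with in-capsule data disabled (-c 0), as in the trace below.
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host (-a), capped at 10 namespaces (-m 10),
# backed by a 32 MB malloc bdev with 4096-byte blocks as namespace 1.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# bdevperf is then pointed at that listener, e.g.
#   build/examples/bdevperf --json <config> -t 10 -q 128 -w verify -o 8192
# where <config> carries the bdev_nvme_attach_controller parameters printed in the
# trace (trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1).

With -c 0 the initiator cannot place write data in the command capsule, so payloads go through the transport's regular, here zero-copy, data path; that is presumably the point of this flag combination in the test.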
00:23:05.530 [2024-04-18 11:10:34.013553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.094 11:10:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.094 11:10:34 -- common/autotest_common.sh@850 -- # return 0 00:23:06.094 11:10:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:06.094 11:10:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:06.094 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:06.352 11:10:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.352 11:10:34 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:23:06.352 11:10:34 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:23:06.352 11:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.353 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:06.353 [2024-04-18 11:10:34.782928] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.353 11:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.353 11:10:34 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:06.353 11:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.353 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:06.353 11:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.353 11:10:34 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.353 11:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.353 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:06.353 [2024-04-18 11:10:34.799021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.353 11:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.353 11:10:34 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:06.353 11:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.353 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:06.353 11:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.353 11:10:34 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:23:06.353 11:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.353 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:06.353 malloc0 00:23:06.353 11:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.353 11:10:34 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:06.353 11:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.353 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:06.353 11:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.353 11:10:34 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:23:06.353 11:10:34 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:23:06.353 11:10:34 -- nvmf/common.sh@521 -- # config=() 00:23:06.353 11:10:34 -- nvmf/common.sh@521 -- # local subsystem config 00:23:06.353 11:10:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:06.353 11:10:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:06.353 { 00:23:06.353 "params": { 00:23:06.353 "name": "Nvme$subsystem", 00:23:06.353 "trtype": "$TEST_TRANSPORT", 
00:23:06.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.353 "adrfam": "ipv4", 00:23:06.353 "trsvcid": "$NVMF_PORT", 00:23:06.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.353 "hdgst": ${hdgst:-false}, 00:23:06.353 "ddgst": ${ddgst:-false} 00:23:06.353 }, 00:23:06.353 "method": "bdev_nvme_attach_controller" 00:23:06.353 } 00:23:06.353 EOF 00:23:06.353 )") 00:23:06.353 11:10:34 -- nvmf/common.sh@543 -- # cat 00:23:06.353 11:10:34 -- nvmf/common.sh@545 -- # jq . 00:23:06.353 11:10:34 -- nvmf/common.sh@546 -- # IFS=, 00:23:06.353 11:10:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:06.353 "params": { 00:23:06.353 "name": "Nvme1", 00:23:06.353 "trtype": "tcp", 00:23:06.353 "traddr": "10.0.0.2", 00:23:06.353 "adrfam": "ipv4", 00:23:06.353 "trsvcid": "4420", 00:23:06.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.353 "hdgst": false, 00:23:06.353 "ddgst": false 00:23:06.353 }, 00:23:06.353 "method": "bdev_nvme_attach_controller" 00:23:06.353 }' 00:23:06.353 [2024-04-18 11:10:34.897427] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:06.353 [2024-04-18 11:10:34.897547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91283 ] 00:23:06.610 [2024-04-18 11:10:35.037454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.610 [2024-04-18 11:10:35.134307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.867 Running I/O for 10 seconds... 00:23:16.833 00:23:16.833 Latency(us) 00:23:16.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.833 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:23:16.833 Verification LBA range: start 0x0 length 0x1000 00:23:16.833 Nvme1n1 : 10.02 5354.54 41.83 0.00 0.00 23835.78 2874.65 30980.65 00:23:16.833 =================================================================================================================== 00:23:16.833 Total : 5354.54 41.83 0.00 0.00 23835.78 2874.65 30980.65 00:23:17.092 11:10:45 -- target/zcopy.sh@39 -- # perfpid=91394 00:23:17.092 11:10:45 -- target/zcopy.sh@41 -- # xtrace_disable 00:23:17.092 11:10:45 -- common/autotest_common.sh@10 -- # set +x 00:23:17.092 11:10:45 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:23:17.092 11:10:45 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:23:17.092 11:10:45 -- nvmf/common.sh@521 -- # config=() 00:23:17.092 11:10:45 -- nvmf/common.sh@521 -- # local subsystem config 00:23:17.092 11:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:17.092 11:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:17.092 { 00:23:17.092 "params": { 00:23:17.092 "name": "Nvme$subsystem", 00:23:17.092 "trtype": "$TEST_TRANSPORT", 00:23:17.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.092 "adrfam": "ipv4", 00:23:17.092 "trsvcid": "$NVMF_PORT", 00:23:17.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.092 "hdgst": ${hdgst:-false}, 00:23:17.092 "ddgst": ${ddgst:-false} 00:23:17.092 }, 00:23:17.092 "method": "bdev_nvme_attach_controller" 00:23:17.092 } 00:23:17.092 EOF 00:23:17.092 
)") 00:23:17.092 11:10:45 -- nvmf/common.sh@543 -- # cat 00:23:17.092 [2024-04-18 11:10:45.544214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.544257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 11:10:45 -- nvmf/common.sh@545 -- # jq . 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 11:10:45 -- nvmf/common.sh@546 -- # IFS=, 00:23:17.092 11:10:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:17.092 "params": { 00:23:17.092 "name": "Nvme1", 00:23:17.092 "trtype": "tcp", 00:23:17.092 "traddr": "10.0.0.2", 00:23:17.092 "adrfam": "ipv4", 00:23:17.092 "trsvcid": "4420", 00:23:17.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.092 "hdgst": false, 00:23:17.092 "ddgst": false 00:23:17.092 }, 00:23:17.092 "method": "bdev_nvme_attach_controller" 00:23:17.092 }' 00:23:17.092 [2024-04-18 11:10:45.556175] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.556208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.564170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.564199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.576170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.576199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.588173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.588200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.600198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.600225] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 [2024-04-18 11:10:45.603690] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:17.092 [2024-04-18 11:10:45.603808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91394 ] 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.612204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.612231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.624188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.624216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.636188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.636215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.648201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.648227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.092 [2024-04-18 11:10:45.660192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.092 [2024-04-18 11:10:45.660218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.092 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.093 [2024-04-18 11:10:45.672205] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-18 11:10:45.672231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.093 [2024-04-18 11:10:45.684209] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-18 11:10:45.684235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.093 [2024-04-18 11:10:45.696210] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-18 11:10:45.696236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.093 [2024-04-18 11:10:45.708217] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-18 11:10:45.708244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.093 [2024-04-18 11:10:45.720220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.093 [2024-04-18 11:10:45.720247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.093 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.732224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.732252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.744226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.744254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.749472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.351 [2024-04-18 11:10:45.756238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.756267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.768247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.768282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.780245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.780276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.792239] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.792266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.804251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.804280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.816250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.816277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:17.351 [2024-04-18 11:10:45.828252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.828286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.836165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.351 [2024-04-18 11:10:45.840268] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.840296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.351 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.351 [2024-04-18 11:10:45.852256] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.351 [2024-04-18 11:10:45.852283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.864286] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.864315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.876272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.876302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.888275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.888305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.900271] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.900298] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.912275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.912303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.924283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.924312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.936280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.936307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.948295] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.948326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.960299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.960329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.972315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.972348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.352 [2024-04-18 11:10:45.984311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.352 [2024-04-18 11:10:45.984343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.352 2024/04/18 11:10:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:45.996311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:45.996342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.008321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.008354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 Running I/O for 5 seconds... 00:23:17.611 [2024-04-18 11:10:46.020312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.020340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.038188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.038226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.053868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.053901] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.071467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.071503] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.087100] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.087148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.097489] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.097533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.112066] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.112126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.127899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.127932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.138387] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.138420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.153388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.153424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.166006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.166068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.611 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.611 [2024-04-18 11:10:46.182739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.611 [2024-04-18 11:10:46.182776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.612 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.612 [2024-04-18 11:10:46.198730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.612 [2024-04-18 11:10:46.198766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.612 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.612 [2024-04-18 11:10:46.215699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.612 [2024-04-18 11:10:46.215736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.612 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.612 [2024-04-18 11:10:46.230164] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.612 [2024-04-18 11:10:46.230199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.612 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.612 [2024-04-18 11:10:46.245590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.612 [2024-04-18 11:10:46.245624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.612 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.255809] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.255845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.270318] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.270354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.286899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.286937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.304651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.304684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.320116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.320151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.330641] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.330677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.345886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.345923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:17.872 [2024-04-18 11:10:46.362707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.362743] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.379789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.379822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.395808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.395842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.413215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.413249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.429283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.872 [2024-04-18 11:10:46.429317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.872 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.872 [2024-04-18 11:10:46.446175] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.873 [2024-04-18 11:10:46.446208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.873 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.873 [2024-04-18 11:10:46.461757] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.873 [2024-04-18 11:10:46.461792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.873 2024/04/18 11:10:46 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.873 [2024-04-18 11:10:46.477432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.873 [2024-04-18 11:10:46.477468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.873 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.873 [2024-04-18 11:10:46.488233] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.873 [2024-04-18 11:10:46.488288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.873 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:17.873 [2024-04-18 11:10:46.502956] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:17.873 [2024-04-18 11:10:46.502992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:17.873 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.513301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.513337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.528100] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.528144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.545193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.545227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.560856] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.560889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.576592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.576627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.593013] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.593061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.608355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.608389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.624386] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.624421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.641980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.642015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.657258] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.657309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.674437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.674473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.132 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.132 [2024-04-18 11:10:46.690142] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.132 [2024-04-18 11:10:46.690177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.133 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.133 [2024-04-18 11:10:46.707025] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.133 [2024-04-18 11:10:46.707105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.133 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.133 [2024-04-18 11:10:46.722809] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.133 [2024-04-18 11:10:46.722845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.133 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.133 [2024-04-18 11:10:46.739644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.133 [2024-04-18 11:10:46.739679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.133 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.133 [2024-04-18 11:10:46.755065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.133 [2024-04-18 11:10:46.755100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.133 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.133 [2024-04-18 11:10:46.765354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:23:18.133 [2024-04-18 11:10:46.765389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.133 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.779553] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.779589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.790487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.790523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.805294] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.805333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.822044] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.822109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.838913] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.838964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.854617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.854653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.865364] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.865401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.880251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.880284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.891150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.891184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.906705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.906743] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.922239] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.922272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.938536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.938569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.955337] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
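
For reference, the long run of Code=-32602 failures in this stretch of the log appears to be the test repeatedly issuing nvmf_subsystem_add_ns while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1; each attempt produces the same "Requested NSID 1 already in use" / "Unable to add namespace" pair. A rough manual equivalent of the retried call, built from the parameters shown in the logged request (bdev malloc0, nsid 1) and using SPDK's scripts/rpc.py, is sketched below; the exact option spelling can differ between SPDK versions, so treat this as illustrative rather than the harness's literal command line:

  # Illustrative only: re-issue the namespace add that the log shows failing.
  # Positional arguments are the subsystem NQN and the bdev name; --nsid pins the namespace ID.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1
  # While NSID 1 is already in use, the target rejects the call with JSON-RPC error Code=-32602
  # (Invalid parameters), which matches the repeated entries in this log.
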
00:23:18.391 [2024-04-18 11:10:46.955372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.971529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.971565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.391 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.391 [2024-04-18 11:10:46.987430] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.391 [2024-04-18 11:10:46.987467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.392 2024/04/18 11:10:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.392 [2024-04-18 11:10:46.997375] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.392 [2024-04-18 11:10:46.997409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.392 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.392 [2024-04-18 11:10:47.013403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.392 [2024-04-18 11:10:47.013438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.392 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.392 [2024-04-18 11:10:47.029147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.392 [2024-04-18 11:10:47.029181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.392 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.039912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.039945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.054421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.054456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.070058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.070092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.080560] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.080593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.095012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.095075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.110979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.111013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.127030] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.127107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.137762] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.137795] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.153221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.153253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.169949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.169983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.186909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.186943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.202459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.202510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.212855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.212890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.228129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.228161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.243705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.243738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.254021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.254080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.268792] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.268826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.653 [2024-04-18 11:10:47.278494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.653 [2024-04-18 11:10:47.278529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.653 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.294936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.294972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.310262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.310297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.325949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.326002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.335654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.335689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.351513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.351549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.368726] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.368763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.384201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.384239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.394912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.394947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.409473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.409507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:18.919 [2024-04-18 11:10:47.420267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.420302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.434938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.434974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.445513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.445548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.456640] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.456674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.473901] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.473936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.490237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.490270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.506171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.506203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.919 2024/04/18 11:10:47 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.919 [2024-04-18 11:10:47.521871] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.919 [2024-04-18 11:10:47.521904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.920 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.920 [2024-04-18 11:10:47.538323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.920 [2024-04-18 11:10:47.538358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.920 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:18.920 [2024-04-18 11:10:47.554908] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.920 [2024-04-18 11:10:47.554943] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.920 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.570262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.570295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.587033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.587096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.603886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.603919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.619345] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.619380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.636483] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.636516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.652493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.652532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.668946] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.668979] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.686126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.686158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.701465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.701531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.718399] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.718431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.733713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.733747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.749672] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.749707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.765366] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.765401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.780712] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.780748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.797827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.797860] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.179 [2024-04-18 11:10:47.812942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.179 [2024-04-18 11:10:47.812975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.179 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.829509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.829545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.846505] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.846544] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.862534] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.862569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.879477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.879513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.896664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.896699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.912359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.912394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.922330] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.922365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.937822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.937856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.953176] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.953209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.968619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.968653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:47.986170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:47.986205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:48.002160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.438 [2024-04-18 11:10:48.002195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.438 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.438 [2024-04-18 11:10:48.019133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.439 [2024-04-18 11:10:48.019166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.439 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.439 [2024-04-18 11:10:48.034802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:19.439 [2024-04-18 11:10:48.034837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.439 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.439 [2024-04-18 11:10:48.045133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.439 [2024-04-18 11:10:48.045181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.439 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.439 [2024-04-18 11:10:48.060215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.439 [2024-04-18 11:10:48.060249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.439 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.439 [2024-04-18 11:10:48.070792] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.439 [2024-04-18 11:10:48.070826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.439 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.085731] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.085767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.098038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.098099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.115815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.115850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.130758] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.130794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.146287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.146321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.158411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.158446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.175316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.175351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.191187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.191219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.207236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.207293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.224016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.224078] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.240091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.240123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.256985] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.257018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.272589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.272625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.283184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.283218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.298086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.298137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.310777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.310812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.321069] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.321103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.698 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.698 [2024-04-18 11:10:48.336625] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.698 [2024-04-18 11:10:48.336661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.352126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.352159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.367856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.367890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.383362] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.383407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.393317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.393351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.407666] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.407701] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.423799] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.423833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.439895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.439930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.449902] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.449938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.466083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.466118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.482644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.482680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.499461] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.499498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:19.957 [2024-04-18 11:10:48.515297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.515333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.531095] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.531129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.546828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.546880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.557687] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.557729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.573263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.573300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:19.957 [2024-04-18 11:10:48.588918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:19.957 [2024-04-18 11:10:48.588953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:19.957 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.599465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.599501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.614266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.614301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.625021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.625061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.639869] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.639904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.656569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.656605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.672099] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.672143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.687848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.687883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.697761] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.697796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.713986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.714020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.730791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.730826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.747282] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.747333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.763020] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.763068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.778988] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.779023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.789765] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.217 [2024-04-18 11:10:48.789814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.217 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.217 [2024-04-18 11:10:48.804148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.218 [2024-04-18 11:10:48.804184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.218 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.218 [2024-04-18 11:10:48.821324] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.218 [2024-04-18 11:10:48.821357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.218 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.218 [2024-04-18 11:10:48.836948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.218 [2024-04-18 11:10:48.836981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.218 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.218 [2024-04-18 11:10:48.846665] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.218 [2024-04-18 11:10:48.846698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.218 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.862684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.862719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.879445] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.879480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.894913] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.894963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.905461] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.905494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.920499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.920533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.937608] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.937641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.952829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.952863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.968705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.968752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.980810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.980842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:48.998277] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:48.998310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.013750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:49.013783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.030425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:49.030458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.046198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:49.046230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.055885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:49.055918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.071622] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:49.071658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.089164] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:20.477 [2024-04-18 11:10:49.089216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.105077] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:49.105138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.477 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.477 [2024-04-18 11:10:49.115575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.477 [2024-04-18 11:10:49.115610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.130118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.130166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.147207] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.147240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.162385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.162434] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.178154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.178188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.197061] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.197098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.211920] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.211954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.222186] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.222219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.237587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.237621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.256019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.256081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.271572] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.271622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.287026] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.287089] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.302355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.302392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.320051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.320093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.736 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.736 [2024-04-18 11:10:49.336098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.736 [2024-04-18 11:10:49.336131] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.737 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.737 [2024-04-18 11:10:49.352071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.737 [2024-04-18 11:10:49.352102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.737 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.737 [2024-04-18 11:10:49.368964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.737 [2024-04-18 11:10:49.368998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.737 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.995 [2024-04-18 11:10:49.385578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.995 [2024-04-18 11:10:49.385612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.995 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.995 [2024-04-18 11:10:49.404178] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.995 [2024-04-18 11:10:49.404212] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.995 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.995 [2024-04-18 11:10:49.420221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.995 [2024-04-18 11:10:49.420255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.995 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.995 [2024-04-18 11:10:49.435863] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.995 [2024-04-18 11:10:49.435897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.995 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.995 [2024-04-18 11:10:49.447842] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.995 [2024-04-18 11:10:49.447877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.995 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.995 [2024-04-18 11:10:49.464335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.995 [2024-04-18 11:10:49.464369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.995 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.995 [2024-04-18 11:10:49.481276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.995 [2024-04-18 11:10:49.481310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.496782] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.496816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.508888] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.508924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.524977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.525012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.541748] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.541798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.558600] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.558636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.574331] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.574368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.591135] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.591171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:20.996 [2024-04-18 11:10:49.606541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.606582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.622181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.622218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:20.996 [2024-04-18 11:10:49.631793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:20.996 [2024-04-18 11:10:49.631828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:20.996 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.254 [2024-04-18 11:10:49.647567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.254 [2024-04-18 11:10:49.647602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.665005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.665052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.680963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.680999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.698960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.698996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.714541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.714577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.729959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.729995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.745921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.745957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.763361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.763396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.780121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.780156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.790629] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.790665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.805412] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.805447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.816089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.816139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.831136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.831171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.841607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.841642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.852435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.852469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.868318] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.868364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.255 [2024-04-18 11:10:49.885039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.255 [2024-04-18 11:10:49.885098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.255 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.513 [2024-04-18 11:10:49.901188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.513 [2024-04-18 11:10:49.901222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.513 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.513 [2024-04-18 11:10:49.917294] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.513 [2024-04-18 11:10:49.917327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.513 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.513 [2024-04-18 11:10:49.934773] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.513 [2024-04-18 11:10:49.934810] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.513 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.513 [2024-04-18 11:10:49.950301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.513 [2024-04-18 11:10:49.950334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.513 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.513 [2024-04-18 11:10:49.960804] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.513 [2024-04-18 11:10:49.960838] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.513 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.513 [2024-04-18 11:10:49.975972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.513 [2024-04-18 11:10:49.976008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.513 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.513 [2024-04-18 11:10:49.992542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:23:21.513 [2024-04-18 11:10:49.992575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.009543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.009578] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.025145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.025181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.040814] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.040847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.057287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.057331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.074829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.074863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.090950] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.090986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.107014] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.107093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.123664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.123701] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.139605] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.139655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.514 [2024-04-18 11:10:50.150784] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.514 [2024-04-18 11:10:50.150821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.514 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.166092] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.166126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.182098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.182132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.199234] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:21.773 [2024-04-18 11:10:50.199267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.216920] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.216955] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.232805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.232844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.248685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.248719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.258313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.258346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.274240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.274272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.289147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.289180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.304983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.305016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.322524] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.322560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.338264] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.338298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.355576] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.355612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.370964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.371014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.381623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.381656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:21.773 [2024-04-18 11:10:50.396257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:21.773 [2024-04-18 11:10:50.396289] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:21.773 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.032 [2024-04-18 11:10:50.413766] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.032 [2024-04-18 11:10:50.413813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.032 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.032 [2024-04-18 11:10:50.430310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.032 [2024-04-18 11:10:50.430367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.032 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.032 [2024-04-18 11:10:50.445583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.032 [2024-04-18 11:10:50.445624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.032 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.032 [2024-04-18 11:10:50.462926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.032 [2024-04-18 11:10:50.462968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.032 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.032 [2024-04-18 11:10:50.478781] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.032 [2024-04-18 11:10:50.478827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.032 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.032 [2024-04-18 11:10:50.495980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.032 [2024-04-18 11:10:50.496018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.032 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.032 [2024-04-18 11:10:50.513088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.032 [2024-04-18 11:10:50.513144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.032 2024/04/18 11:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:23:22.551 Latency(us)
00:23:22.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.551 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:23:22.551 Nvme1n1 : 5.01 11489.50 89.76 0.00 0.00 11124.63 4855.62 20733.21
00:23:22.551 ===================================================================================================================
00:23:22.551 Total : 11489.50 89.76 0.00 0.00 11124.63 4855.62 20733.21
00:23:22.810 [2024-04-18 11:10:51.237570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.810 [2024-04-18 11:10:51.237598]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.810 2024/04/18 11:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.810 [2024-04-18 11:10:51.249598] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:22.810 [2024-04-18 11:10:51.249625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:22.810 2024/04/18 11:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:22.810 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (91394) - No such process 00:23:22.810 11:10:51 -- target/zcopy.sh@49 -- # wait 91394 00:23:22.810 11:10:51 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:22.810 11:10:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.810 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:23:22.810 11:10:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.810 11:10:51 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:22.810 11:10:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.810 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:23:22.810 delay0 00:23:22.810 11:10:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.810 11:10:51 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:23:22.810 11:10:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.810 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:23:22.810 11:10:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.810 11:10:51 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:23:22.810 [2024-04-18 11:10:51.449262] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:29.402 Initializing NVMe Controllers 00:23:29.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.402 Initialization complete. Launching workers. 
00:23:29.402 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 90 00:23:29.402 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 377, failed to submit 33 00:23:29.402 success 207, unsuccess 170, failed 0 00:23:29.402 11:10:57 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:23:29.402 11:10:57 -- target/zcopy.sh@60 -- # nvmftestfini 00:23:29.402 11:10:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:29.402 11:10:57 -- nvmf/common.sh@117 -- # sync 00:23:29.402 11:10:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.402 11:10:57 -- nvmf/common.sh@120 -- # set +e 00:23:29.402 11:10:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.402 11:10:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.402 rmmod nvme_tcp 00:23:29.402 rmmod nvme_fabrics 00:23:29.402 rmmod nvme_keyring 00:23:29.402 11:10:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.402 11:10:57 -- nvmf/common.sh@124 -- # set -e 00:23:29.402 11:10:57 -- nvmf/common.sh@125 -- # return 0 00:23:29.402 11:10:57 -- nvmf/common.sh@478 -- # '[' -n 91232 ']' 00:23:29.403 11:10:57 -- nvmf/common.sh@479 -- # killprocess 91232 00:23:29.403 11:10:57 -- common/autotest_common.sh@936 -- # '[' -z 91232 ']' 00:23:29.403 11:10:57 -- common/autotest_common.sh@940 -- # kill -0 91232 00:23:29.403 11:10:57 -- common/autotest_common.sh@941 -- # uname 00:23:29.403 11:10:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:29.403 11:10:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91232 00:23:29.403 killing process with pid 91232 00:23:29.403 11:10:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:29.403 11:10:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:29.403 11:10:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91232' 00:23:29.403 11:10:57 -- common/autotest_common.sh@955 -- # kill 91232 00:23:29.403 11:10:57 -- common/autotest_common.sh@960 -- # wait 91232 00:23:29.403 11:10:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:29.403 11:10:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:29.403 11:10:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:29.403 11:10:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.403 11:10:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.403 11:10:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.403 11:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.403 11:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.403 11:10:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:29.403 ************************************ 00:23:29.403 END TEST nvmf_zcopy 00:23:29.403 ************************************ 00:23:29.403 00:23:29.403 real 0m24.678s 00:23:29.403 user 0m39.541s 00:23:29.403 sys 0m6.985s 00:23:29.403 11:10:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:29.403 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:23:29.403 11:10:57 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:29.403 11:10:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:29.403 11:10:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:29.403 11:10:57 -- common/autotest_common.sh@10 -- # set +x 00:23:29.403 ************************************ 00:23:29.403 START TEST nvmf_nmic 
00:23:29.403 ************************************ 00:23:29.403 11:10:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:29.663 * Looking for test storage... 00:23:29.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:29.663 11:10:58 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.663 11:10:58 -- nvmf/common.sh@7 -- # uname -s 00:23:29.663 11:10:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.663 11:10:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.663 11:10:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.663 11:10:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.663 11:10:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.663 11:10:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.663 11:10:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.663 11:10:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.663 11:10:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.663 11:10:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.663 11:10:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:29.663 11:10:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:29.663 11:10:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.663 11:10:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.663 11:10:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.663 11:10:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.663 11:10:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.663 11:10:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.663 11:10:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.663 11:10:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.663 11:10:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.663 11:10:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.663 11:10:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.663 11:10:58 -- paths/export.sh@5 -- # export PATH 00:23:29.663 11:10:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.663 11:10:58 -- nvmf/common.sh@47 -- # : 0 00:23:29.663 11:10:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.663 11:10:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.663 11:10:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.663 11:10:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.663 11:10:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.663 11:10:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.663 11:10:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.663 11:10:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.663 11:10:58 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:29.663 11:10:58 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:29.663 11:10:58 -- target/nmic.sh@14 -- # nvmftestinit 00:23:29.663 11:10:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:29.663 11:10:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.663 11:10:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:29.663 11:10:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:29.663 11:10:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:29.663 11:10:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.663 11:10:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.663 11:10:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.663 11:10:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:29.663 11:10:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:29.663 11:10:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:29.663 11:10:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:29.663 11:10:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:29.663 11:10:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:29.663 11:10:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.663 11:10:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.663 11:10:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:29.663 11:10:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:29.663 11:10:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.663 11:10:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.663 11:10:58 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.663 11:10:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.663 11:10:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.663 11:10:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.663 11:10:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.663 11:10:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.663 11:10:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:29.663 11:10:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:29.663 Cannot find device "nvmf_tgt_br" 00:23:29.664 11:10:58 -- nvmf/common.sh@155 -- # true 00:23:29.664 11:10:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.664 Cannot find device "nvmf_tgt_br2" 00:23:29.664 11:10:58 -- nvmf/common.sh@156 -- # true 00:23:29.664 11:10:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:29.664 11:10:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:29.664 Cannot find device "nvmf_tgt_br" 00:23:29.664 11:10:58 -- nvmf/common.sh@158 -- # true 00:23:29.664 11:10:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:29.664 Cannot find device "nvmf_tgt_br2" 00:23:29.664 11:10:58 -- nvmf/common.sh@159 -- # true 00:23:29.664 11:10:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:29.664 11:10:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:29.664 11:10:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.664 11:10:58 -- nvmf/common.sh@162 -- # true 00:23:29.664 11:10:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.664 11:10:58 -- nvmf/common.sh@163 -- # true 00:23:29.664 11:10:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.664 11:10:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.664 11:10:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.664 11:10:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.664 11:10:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.664 11:10:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.664 11:10:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.664 11:10:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.923 11:10:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:29.923 11:10:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:29.923 11:10:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:29.923 11:10:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:29.923 11:10:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:29.923 11:10:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.923 11:10:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.923 11:10:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:23:29.923 11:10:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:29.923 11:10:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:29.923 11:10:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.923 11:10:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.923 11:10:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.923 11:10:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.923 11:10:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.923 11:10:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:29.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:23:29.923 00:23:29.923 --- 10.0.0.2 ping statistics --- 00:23:29.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.923 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:29.923 11:10:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:29.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:29.923 00:23:29.923 --- 10.0.0.3 ping statistics --- 00:23:29.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.923 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:29.923 11:10:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:29.923 00:23:29.923 --- 10.0.0.1 ping statistics --- 00:23:29.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.923 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:29.923 11:10:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.923 11:10:58 -- nvmf/common.sh@422 -- # return 0 00:23:29.923 11:10:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:29.923 11:10:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.923 11:10:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:29.923 11:10:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:29.923 11:10:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.923 11:10:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:29.923 11:10:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:29.923 11:10:58 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:29.923 11:10:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:29.923 11:10:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:29.923 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:29.923 11:10:58 -- nvmf/common.sh@470 -- # nvmfpid=91720 00:23:29.923 11:10:58 -- nvmf/common.sh@471 -- # waitforlisten 91720 00:23:29.923 11:10:58 -- common/autotest_common.sh@817 -- # '[' -z 91720 ']' 00:23:29.923 11:10:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.923 11:10:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.923 11:10:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:29.923 11:10:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
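The nvmf_veth_init sequence above builds the all-virtual test topology: a dedicated network namespace for the target, veth pairs linking it to the host, a bridge joining the host-side ends, an iptables rule admitting NVMe/TCP traffic on port 4420, and ping checks in both directions. A condensed sketch of those steps, using the interface names and addresses from the trace (illustrative only, not the literal common.sh code):

# target namespace plus a veth pair toward the initiator
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# (a second pair, nvmf_tgt_if2/nvmf_tgt_br2 carrying 10.0.0.3, is set up the same way)

# addresses: 10.0.0.1 on the initiator side, 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side veth ends and open TCP/4420 for NVMe over TCP
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check, mirroring the ping output above
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1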
00:23:29.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.923 11:10:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:29.923 11:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:29.923 [2024-04-18 11:10:58.516813] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:29.923 [2024-04-18 11:10:58.516967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.181 [2024-04-18 11:10:58.659881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.181 [2024-04-18 11:10:58.764602] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.181 [2024-04-18 11:10:58.764682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.181 [2024-04-18 11:10:58.764694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.181 [2024-04-18 11:10:58.764703] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.181 [2024-04-18 11:10:58.764710] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.181 [2024-04-18 11:10:58.765442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.181 [2024-04-18 11:10:58.765639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.181 [2024-04-18 11:10:58.765763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.181 [2024-04-18 11:10:58.765768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.111 11:10:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:31.111 11:10:59 -- common/autotest_common.sh@850 -- # return 0 00:23:31.111 11:10:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:31.111 11:10:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 11:10:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.111 11:10:59 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 [2024-04-18 11:10:59.501230] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 Malloc0 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.111 11:10:59 -- 
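With the topology in place, nvmfappstart launches the target inside the namespace (-m 0xF gives it four reactor cores, -e 0xFFFF enables every tracepoint group) and waits for the JSON-RPC socket before configuring it. A minimal hand-run equivalent looks roughly like this; the polling loop is an illustration of what waitforlisten does, not its actual implementation:

# start the NVMe-oF target inside the test namespace
sudo ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done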
common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 [2024-04-18 11:10:59.571363] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:31.111 test case1: single bdev can't be used in multiple subsystems 00:23:31.111 11:10:59 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@28 -- # nmic_status=0 00:23:31.111 11:10:59 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 [2024-04-18 11:10:59.595220] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:31.111 [2024-04-18 11:10:59.595255] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:31.111 [2024-04-18 11:10:59.595272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:31.111 2024/04/18 11:10:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:31.111 request: 00:23:31.111 { 00:23:31.111 "method": "nvmf_subsystem_add_ns", 00:23:31.111 "params": { 00:23:31.111 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.111 "namespace": { 00:23:31.111 "bdev_name": "Malloc0", 00:23:31.111 "no_auto_visible": false 00:23:31.111 } 00:23:31.111 } 00:23:31.111 } 00:23:31.111 Got JSON-RPC error response 00:23:31.111 GoRPCClient: error on JSON-RPC call 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@29 -- # nmic_status=1 00:23:31.111 11:10:59 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:31.111 Adding namespace failed - expected result. 00:23:31.111 11:10:59 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
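Test case 1 above exercises bdev claim semantics: Malloc0 is already claimed (type exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching it to cnode2 has to fail, and the harness treats the -32602 response as the expected result. Reproduced by hand the sequence is roughly the following; the rpc.py subcommands mirror the method names in the trace, though exact flags can vary between SPDK versions:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # succeeds and claims the bdev

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0     # rejected:
# bdev Malloc0 already claimed -> JSON-RPC error Code=-32602 Msg=Invalid parameters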
00:23:31.111 test case2: host connect to nvmf target in multiple paths 00:23:31.111 11:10:59 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:31.111 11:10:59 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:31.111 11:10:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.111 11:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:31.111 [2024-04-18 11:10:59.607485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:31.111 11:10:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.111 11:10:59 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:31.369 11:10:59 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:31.369 11:10:59 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:31.370 11:10:59 -- common/autotest_common.sh@1184 -- # local i=0 00:23:31.370 11:10:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:31.370 11:10:59 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:31.370 11:10:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:33.304 11:11:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:33.304 11:11:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:33.562 11:11:01 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:23:33.562 11:11:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:33.562 11:11:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:33.562 11:11:01 -- common/autotest_common.sh@1194 -- # return 0 00:23:33.562 11:11:01 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:33.562 [global] 00:23:33.562 thread=1 00:23:33.562 invalidate=1 00:23:33.562 rw=write 00:23:33.562 time_based=1 00:23:33.562 runtime=1 00:23:33.562 ioengine=libaio 00:23:33.562 direct=1 00:23:33.562 bs=4096 00:23:33.562 iodepth=1 00:23:33.562 norandommap=0 00:23:33.562 numjobs=1 00:23:33.562 00:23:33.562 verify_dump=1 00:23:33.562 verify_backlog=512 00:23:33.562 verify_state_save=0 00:23:33.562 do_verify=1 00:23:33.562 verify=crc32c-intel 00:23:33.562 [job0] 00:23:33.562 filename=/dev/nvme0n1 00:23:33.562 Could not set queue depth (nvme0n1) 00:23:33.562 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:33.562 fio-3.35 00:23:33.562 Starting 1 thread 00:23:34.933 00:23:34.933 job0: (groupid=0, jobs=1): err= 0: pid=91831: Thu Apr 18 11:11:03 2024 00:23:34.933 read: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec) 00:23:34.933 slat (nsec): min=12433, max=49918, avg=15801.64, stdev=3397.00 00:23:34.933 clat (usec): min=133, max=334, avg=152.09, stdev= 9.88 00:23:34.933 lat (usec): min=146, max=374, avg=167.90, stdev=11.03 00:23:34.933 clat percentiles (usec): 00:23:34.933 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:23:34.933 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:23:34.933 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 
95.00th=[ 169], 00:23:34.933 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 206], 99.95th=[ 269], 00:23:34.933 | 99.99th=[ 334] 00:23:34.933 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:23:34.933 slat (usec): min=19, max=105, avg=22.77, stdev= 4.18 00:23:34.933 clat (usec): min=93, max=524, avg=107.33, stdev=11.54 00:23:34.933 lat (usec): min=112, max=557, avg=130.10, stdev=12.86 00:23:34.933 clat percentiles (usec): 00:23:34.933 | 1.00th=[ 95], 5.00th=[ 98], 10.00th=[ 99], 20.00th=[ 101], 00:23:34.933 | 30.00th=[ 103], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 108], 00:23:34.933 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 122], 00:23:34.933 | 99.00th=[ 131], 99.50th=[ 137], 99.90th=[ 225], 99.95th=[ 388], 00:23:34.933 | 99.99th=[ 523] 00:23:34.933 bw ( KiB/s): min=14120, max=14120, per=98.59%, avg=14120.00, stdev= 0.00, samples=1 00:23:34.933 iops : min= 3530, max= 3530, avg=3530.00, stdev= 0.00, samples=1 00:23:34.933 lat (usec) : 100=7.60%, 250=92.34%, 500=0.04%, 750=0.01% 00:23:34.933 cpu : usr=3.60%, sys=8.60%, ctx=6695, majf=0, minf=2 00:23:34.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.933 issued rwts: total=3109,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:34.933 00:23:34.933 Run status group 0 (all jobs): 00:23:34.933 READ: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.1MiB (12.7MB), run=1001-1001msec 00:23:34.933 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:23:34.933 00:23:34.933 Disk stats (read/write): 00:23:34.933 nvme0n1: ios=2933/3072, merge=0/0, ticks=449/354, in_queue=803, util=91.06% 00:23:34.933 11:11:03 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:34.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:34.933 11:11:03 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:34.933 11:11:03 -- common/autotest_common.sh@1205 -- # local i=0 00:23:34.933 11:11:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:34.933 11:11:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:34.933 11:11:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:34.933 11:11:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:34.933 11:11:03 -- common/autotest_common.sh@1217 -- # return 0 00:23:34.933 11:11:03 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:34.933 11:11:03 -- target/nmic.sh@53 -- # nvmftestfini 00:23:34.933 11:11:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:34.933 11:11:03 -- nvmf/common.sh@117 -- # sync 00:23:34.933 11:11:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.933 11:11:03 -- nvmf/common.sh@120 -- # set +e 00:23:34.933 11:11:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.933 11:11:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.933 rmmod nvme_tcp 00:23:34.933 rmmod nvme_fabrics 00:23:34.933 rmmod nvme_keyring 00:23:34.933 11:11:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.933 11:11:03 -- nvmf/common.sh@124 -- # set -e 00:23:34.934 11:11:03 -- nvmf/common.sh@125 -- # return 0 00:23:34.934 11:11:03 -- nvmf/common.sh@478 -- 
# '[' -n 91720 ']' 00:23:34.934 11:11:03 -- nvmf/common.sh@479 -- # killprocess 91720 00:23:34.934 11:11:03 -- common/autotest_common.sh@936 -- # '[' -z 91720 ']' 00:23:34.934 11:11:03 -- common/autotest_common.sh@940 -- # kill -0 91720 00:23:34.934 11:11:03 -- common/autotest_common.sh@941 -- # uname 00:23:34.934 11:11:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:34.934 11:11:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91720 00:23:34.934 11:11:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:34.934 11:11:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:34.934 killing process with pid 91720 00:23:34.934 11:11:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91720' 00:23:34.934 11:11:03 -- common/autotest_common.sh@955 -- # kill 91720 00:23:34.934 11:11:03 -- common/autotest_common.sh@960 -- # wait 91720 00:23:35.191 11:11:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:35.191 11:11:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:35.191 11:11:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:35.191 11:11:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.191 11:11:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.191 11:11:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.191 11:11:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.191 11:11:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.497 11:11:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:35.497 00:23:35.497 real 0m5.840s 00:23:35.497 user 0m19.685s 00:23:35.497 sys 0m1.401s 00:23:35.497 11:11:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:35.497 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 ************************************ 00:23:35.497 END TEST nvmf_nmic 00:23:35.497 ************************************ 00:23:35.497 11:11:03 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:35.497 11:11:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:35.497 11:11:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:35.497 11:11:03 -- common/autotest_common.sh@10 -- # set +x 00:23:35.497 ************************************ 00:23:35.497 START TEST nvmf_fio_target 00:23:35.497 ************************************ 00:23:35.497 11:11:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:35.497 * Looking for test storage... 
00:23:35.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:35.497 11:11:04 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:35.497 11:11:04 -- nvmf/common.sh@7 -- # uname -s 00:23:35.497 11:11:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.497 11:11:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.497 11:11:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.497 11:11:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.497 11:11:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.497 11:11:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.497 11:11:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.497 11:11:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.497 11:11:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.497 11:11:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.497 11:11:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:35.497 11:11:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:35.497 11:11:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.497 11:11:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.497 11:11:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:35.497 11:11:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.497 11:11:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:35.497 11:11:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.497 11:11:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.497 11:11:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.497 11:11:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.498 11:11:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.498 11:11:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.498 11:11:04 -- paths/export.sh@5 -- # export PATH 00:23:35.498 11:11:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.498 11:11:04 -- nvmf/common.sh@47 -- # : 0 00:23:35.498 11:11:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.498 11:11:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.498 11:11:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.498 11:11:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.498 11:11:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.498 11:11:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.498 11:11:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.498 11:11:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.498 11:11:04 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:35.498 11:11:04 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:35.498 11:11:04 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:35.498 11:11:04 -- target/fio.sh@16 -- # nvmftestinit 00:23:35.498 11:11:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:35.498 11:11:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.498 11:11:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:35.498 11:11:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:35.498 11:11:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:35.498 11:11:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.498 11:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.498 11:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.498 11:11:04 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:35.498 11:11:04 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:35.498 11:11:04 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:35.498 11:11:04 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:35.498 11:11:04 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:35.498 11:11:04 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:35.498 11:11:04 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.498 11:11:04 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.498 11:11:04 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:35.498 11:11:04 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:35.498 11:11:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:35.498 11:11:04 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:35.498 11:11:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:35.498 11:11:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.498 11:11:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:35.498 11:11:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:35.498 11:11:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:35.498 11:11:04 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:35.498 11:11:04 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:35.498 11:11:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:35.498 Cannot find device "nvmf_tgt_br" 00:23:35.498 11:11:04 -- nvmf/common.sh@155 -- # true 00:23:35.498 11:11:04 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:35.498 Cannot find device "nvmf_tgt_br2" 00:23:35.498 11:11:04 -- nvmf/common.sh@156 -- # true 00:23:35.498 11:11:04 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:35.498 11:11:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:35.498 Cannot find device "nvmf_tgt_br" 00:23:35.498 11:11:04 -- nvmf/common.sh@158 -- # true 00:23:35.498 11:11:04 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:35.498 Cannot find device "nvmf_tgt_br2" 00:23:35.498 11:11:04 -- nvmf/common.sh@159 -- # true 00:23:35.498 11:11:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:35.757 11:11:04 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:35.757 11:11:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:35.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.757 11:11:04 -- nvmf/common.sh@162 -- # true 00:23:35.757 11:11:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:35.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.757 11:11:04 -- nvmf/common.sh@163 -- # true 00:23:35.757 11:11:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:35.757 11:11:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:35.757 11:11:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:35.757 11:11:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:35.757 11:11:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:35.757 11:11:04 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:35.757 11:11:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:35.757 11:11:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:35.757 11:11:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:35.757 11:11:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:35.757 11:11:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:35.757 11:11:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:35.757 11:11:04 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:35.757 11:11:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:35.757 11:11:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:23:35.757 11:11:04 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:35.757 11:11:04 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:35.757 11:11:04 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:35.758 11:11:04 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:35.758 11:11:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:35.758 11:11:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:35.758 11:11:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:35.758 11:11:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:35.758 11:11:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:35.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:23:35.758 00:23:35.758 --- 10.0.0.2 ping statistics --- 00:23:35.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.758 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:35.758 11:11:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:35.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:35.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:35.758 00:23:35.758 --- 10.0.0.3 ping statistics --- 00:23:35.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.758 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:35.758 11:11:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:35.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:35.758 00:23:35.758 --- 10.0.0.1 ping statistics --- 00:23:35.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.758 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:35.758 11:11:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.758 11:11:04 -- nvmf/common.sh@422 -- # return 0 00:23:35.758 11:11:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:35.758 11:11:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.758 11:11:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:35.758 11:11:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:35.758 11:11:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.758 11:11:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:35.758 11:11:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:36.016 11:11:04 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:36.016 11:11:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:36.016 11:11:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:36.016 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:36.016 11:11:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:36.016 11:11:04 -- nvmf/common.sh@470 -- # nvmfpid=92014 00:23:36.016 11:11:04 -- nvmf/common.sh@471 -- # waitforlisten 92014 00:23:36.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
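The nvmf_veth_init sequence traced above builds a two-namespace topology: the initiator stays in the host namespace on nvmf_init_if (10.0.0.1/24), the target namespace nvmf_tgt_ns_spdk owns nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and the three host-side veth peers are joined through the nvmf_br bridge, which the three pings then verify. A minimal standalone sketch of the same plumbing, assuming root privileges, iproute2/iptables on the host, and an SPDK build under ./build; interface names, addresses, and target flags mirror the trace:

#!/usr/bin/env bash
# Sketch: rebuild the veth/bridge topology used by nvmf_veth_init.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair for the initiator, two for the target; the *_br ends join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers and open TCP/4420 toward the initiator.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirroring the trace, then launch the target inside the namespace.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &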
00:23:36.016 11:11:04 -- common/autotest_common.sh@817 -- # '[' -z 92014 ']' 00:23:36.016 11:11:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.016 11:11:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:36.016 11:11:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.016 11:11:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:36.016 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:36.016 [2024-04-18 11:11:04.466049] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:36.016 [2024-04-18 11:11:04.466269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.016 [2024-04-18 11:11:04.604248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:36.274 [2024-04-18 11:11:04.683668] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.274 [2024-04-18 11:11:04.683741] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.274 [2024-04-18 11:11:04.683755] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.274 [2024-04-18 11:11:04.683766] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.274 [2024-04-18 11:11:04.683775] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.274 [2024-04-18 11:11:04.684196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.274 [2024-04-18 11:11:04.684258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.274 [2024-04-18 11:11:04.684689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.274 [2024-04-18 11:11:04.684738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.274 11:11:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:36.274 11:11:04 -- common/autotest_common.sh@850 -- # return 0 00:23:36.274 11:11:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:36.274 11:11:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:36.274 11:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:36.274 11:11:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.274 11:11:04 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:36.536 [2024-04-18 11:11:05.103255] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.536 11:11:05 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:36.795 11:11:05 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:36.795 11:11:05 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:37.362 11:11:05 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:37.362 11:11:05 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:37.362 11:11:05 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:37.362 11:11:05 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 
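Once nvmf_tgt is up inside the namespace, fio.sh drives the rest of the bring-up over the RPC socket. A consolidated sketch of that configuration sequence, assuming the target is already listening on /var/tmp/spdk.sock and that rpc.py and nvme-cli are on the PATH; bdev names, the subsystem NQN, serial number, and listener address all mirror the trace:

#!/usr/bin/env bash
# Sketch: create the TCP transport, back it with malloc/RAID bdevs, and connect.
set -euo pipefail
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB malloc bdevs (512-byte blocks): two plain, two for raid0, three for concat0.
for _ in $(seq 1 7); do $rpc bdev_malloc_create 64 512; done
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# One subsystem exposing all four namespaces on 10.0.0.2:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect from the host namespace; the harness additionally passes
# --hostnqn/--hostid generated with 'nvme gen-hostnqn'. The fio wrapper then targets /dev/nvme0n1-4.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420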
00:23:37.931 11:11:06 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:37.931 11:11:06 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:37.931 11:11:06 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:38.499 11:11:06 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:38.499 11:11:06 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:38.499 11:11:07 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:38.499 11:11:07 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:38.757 11:11:07 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:38.757 11:11:07 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:23:39.016 11:11:07 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:39.273 11:11:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:39.273 11:11:07 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.841 11:11:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:39.841 11:11:08 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:40.099 11:11:08 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.357 [2024-04-18 11:11:08.740699] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.357 11:11:08 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:40.615 11:11:09 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:40.880 11:11:09 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:40.880 11:11:09 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:23:40.880 11:11:09 -- common/autotest_common.sh@1184 -- # local i=0 00:23:40.880 11:11:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:40.880 11:11:09 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:23:40.880 11:11:09 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:23:40.880 11:11:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:43.430 11:11:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:43.430 11:11:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:43.430 11:11:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:23:43.430 11:11:11 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:23:43.430 11:11:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:43.430 11:11:11 -- common/autotest_common.sh@1194 -- # return 0 00:23:43.431 11:11:11 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 
-v 00:23:43.431 [global] 00:23:43.431 thread=1 00:23:43.431 invalidate=1 00:23:43.431 rw=write 00:23:43.431 time_based=1 00:23:43.431 runtime=1 00:23:43.431 ioengine=libaio 00:23:43.431 direct=1 00:23:43.431 bs=4096 00:23:43.431 iodepth=1 00:23:43.431 norandommap=0 00:23:43.431 numjobs=1 00:23:43.431 00:23:43.431 verify_dump=1 00:23:43.431 verify_backlog=512 00:23:43.431 verify_state_save=0 00:23:43.431 do_verify=1 00:23:43.431 verify=crc32c-intel 00:23:43.431 [job0] 00:23:43.431 filename=/dev/nvme0n1 00:23:43.431 [job1] 00:23:43.431 filename=/dev/nvme0n2 00:23:43.431 [job2] 00:23:43.431 filename=/dev/nvme0n3 00:23:43.431 [job3] 00:23:43.431 filename=/dev/nvme0n4 00:23:43.431 Could not set queue depth (nvme0n1) 00:23:43.431 Could not set queue depth (nvme0n2) 00:23:43.431 Could not set queue depth (nvme0n3) 00:23:43.431 Could not set queue depth (nvme0n4) 00:23:43.431 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:43.431 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:43.431 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:43.431 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:43.431 fio-3.35 00:23:43.431 Starting 4 threads 00:23:44.363 00:23:44.363 job0: (groupid=0, jobs=1): err= 0: pid=92296: Thu Apr 18 11:11:12 2024 00:23:44.363 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:23:44.363 slat (nsec): min=12018, max=43672, avg=16786.39, stdev=3151.80 00:23:44.363 clat (usec): min=275, max=708, avg=468.62, stdev=51.78 00:23:44.363 lat (usec): min=289, max=742, avg=485.40, stdev=51.78 00:23:44.363 clat percentiles (usec): 00:23:44.363 | 1.00th=[ 302], 5.00th=[ 404], 10.00th=[ 420], 20.00th=[ 437], 00:23:44.363 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 474], 00:23:44.363 | 70.00th=[ 486], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 553], 00:23:44.363 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 709], 00:23:44.363 | 99.99th=[ 709] 00:23:44.363 write: IOPS=1400, BW=5602KiB/s (5737kB/s)(5608KiB/1001msec); 0 zone resets 00:23:44.363 slat (usec): min=12, max=153, avg=32.07, stdev=14.57 00:23:44.363 clat (usec): min=116, max=3720, avg=322.78, stdev=191.50 00:23:44.363 lat (usec): min=147, max=3777, avg=354.85, stdev=188.70 00:23:44.363 clat percentiles (usec): 00:23:44.363 | 1.00th=[ 145], 5.00th=[ 165], 10.00th=[ 178], 20.00th=[ 204], 00:23:44.363 | 30.00th=[ 237], 40.00th=[ 310], 50.00th=[ 338], 60.00th=[ 359], 00:23:44.363 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 445], 00:23:44.363 | 99.00th=[ 553], 99.50th=[ 603], 99.90th=[ 3687], 99.95th=[ 3720], 00:23:44.363 | 99.99th=[ 3720] 00:23:44.363 bw ( KiB/s): min= 6216, max= 6216, per=20.49%, avg=6216.00, stdev= 0.00, samples=1 00:23:44.363 iops : min= 1554, max= 1554, avg=1554.00, stdev= 0.00, samples=1 00:23:44.363 lat (usec) : 250=18.76%, 500=71.60%, 750=9.36% 00:23:44.363 lat (msec) : 2=0.08%, 4=0.21% 00:23:44.363 cpu : usr=2.00%, sys=4.00%, ctx=2430, majf=0, minf=11 00:23:44.363 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.363 issued rwts: total=1024,1402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.363 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:23:44.363 job1: (groupid=0, jobs=1): err= 0: pid=92297: Thu Apr 18 11:11:12 2024 00:23:44.363 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:23:44.363 slat (nsec): min=12922, max=63839, avg=17368.59, stdev=4131.55 00:23:44.363 clat (usec): min=154, max=2085, avg=222.05, stdev=52.21 00:23:44.364 lat (usec): min=169, max=2100, avg=239.42, stdev=52.28 00:23:44.364 clat percentiles (usec): 00:23:44.364 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 196], 00:23:44.364 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 227], 00:23:44.364 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 269], 00:23:44.364 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 429], 99.95th=[ 725], 00:23:44.364 | 99.99th=[ 2089] 00:23:44.364 write: IOPS=2431, BW=9726KiB/s (9960kB/s)(9736KiB/1001msec); 0 zone resets 00:23:44.364 slat (usec): min=18, max=123, avg=27.36, stdev= 8.98 00:23:44.364 clat (usec): min=61, max=410, avg=178.38, stdev=31.45 00:23:44.364 lat (usec): min=132, max=446, avg=205.74, stdev=34.66 00:23:44.364 clat percentiles (usec): 00:23:44.364 | 1.00th=[ 121], 5.00th=[ 131], 10.00th=[ 141], 20.00th=[ 151], 00:23:44.364 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 186], 00:23:44.364 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 235], 00:23:44.364 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 367], 00:23:44.364 | 99.99th=[ 412] 00:23:44.364 bw ( KiB/s): min= 9280, max= 9280, per=30.60%, avg=9280.00, stdev= 0.00, samples=1 00:23:44.364 iops : min= 2320, max= 2320, avg=2320.00, stdev= 0.00, samples=1 00:23:44.364 lat (usec) : 100=0.02%, 250=92.12%, 500=7.81%, 750=0.02% 00:23:44.364 lat (msec) : 4=0.02% 00:23:44.364 cpu : usr=2.20%, sys=7.40%, ctx=4483, majf=0, minf=9 00:23:44.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.364 issued rwts: total=2048,2434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:44.364 job2: (groupid=0, jobs=1): err= 0: pid=92298: Thu Apr 18 11:11:12 2024 00:23:44.364 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:23:44.364 slat (nsec): min=11785, max=40971, avg=17805.41, stdev=3402.34 00:23:44.364 clat (usec): min=277, max=689, avg=468.20, stdev=51.53 00:23:44.364 lat (usec): min=291, max=707, avg=486.00, stdev=51.68 00:23:44.364 clat percentiles (usec): 00:23:44.364 | 1.00th=[ 318], 5.00th=[ 400], 10.00th=[ 416], 20.00th=[ 433], 00:23:44.364 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 478], 00:23:44.364 | 70.00th=[ 490], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 562], 00:23:44.364 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 676], 99.95th=[ 693], 00:23:44.364 | 99.99th=[ 693] 00:23:44.364 write: IOPS=1456, BW=5826KiB/s (5966kB/s)(5832KiB/1001msec); 0 zone resets 00:23:44.364 slat (usec): min=12, max=131, avg=27.35, stdev= 8.27 00:23:44.364 clat (usec): min=161, max=2134, avg=313.85, stdev=98.05 00:23:44.364 lat (usec): min=193, max=2161, avg=341.20, stdev=96.02 00:23:44.364 clat percentiles (usec): 00:23:44.364 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 217], 00:23:44.364 | 30.00th=[ 237], 40.00th=[ 293], 50.00th=[ 330], 60.00th=[ 351], 00:23:44.364 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 441], 00:23:44.364 | 99.00th=[ 498], 99.50th=[ 
529], 99.90th=[ 603], 99.95th=[ 2147], 00:23:44.364 | 99.99th=[ 2147] 00:23:44.364 bw ( KiB/s): min= 6672, max= 6672, per=22.00%, avg=6672.00, stdev= 0.00, samples=1 00:23:44.364 iops : min= 1668, max= 1668, avg=1668.00, stdev= 0.00, samples=1 00:23:44.364 lat (usec) : 250=19.38%, 500=70.63%, 750=9.95% 00:23:44.364 lat (msec) : 4=0.04% 00:23:44.364 cpu : usr=1.00%, sys=4.50%, ctx=2485, majf=0, minf=12 00:23:44.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.364 issued rwts: total=1024,1458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:44.364 job3: (groupid=0, jobs=1): err= 0: pid=92299: Thu Apr 18 11:11:12 2024 00:23:44.364 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:23:44.364 slat (nsec): min=13028, max=41542, avg=15951.95, stdev=2561.84 00:23:44.364 clat (usec): min=163, max=434, avg=226.53, stdev=27.05 00:23:44.364 lat (usec): min=177, max=449, avg=242.49, stdev=27.31 00:23:44.364 clat percentiles (usec): 00:23:44.364 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 204], 00:23:44.364 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:23:44.364 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 273], 00:23:44.364 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 318], 99.95th=[ 334], 00:23:44.364 | 99.99th=[ 437] 00:23:44.364 write: IOPS=2293, BW=9175KiB/s (9395kB/s)(9184KiB/1001msec); 0 zone resets 00:23:44.364 slat (usec): min=19, max=123, avg=27.26, stdev= 9.09 00:23:44.364 clat (usec): min=120, max=627, avg=188.36, stdev=29.89 00:23:44.364 lat (usec): min=142, max=665, avg=215.62, stdev=33.15 00:23:44.364 clat percentiles (usec): 00:23:44.364 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:23:44.364 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 196], 00:23:44.364 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 239], 00:23:44.364 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 392], 00:23:44.364 | 99.99th=[ 627] 00:23:44.364 bw ( KiB/s): min= 8976, max= 8976, per=29.59%, avg=8976.00, stdev= 0.00, samples=1 00:23:44.364 iops : min= 2244, max= 2244, avg=2244.00, stdev= 0.00, samples=1 00:23:44.364 lat (usec) : 250=90.26%, 500=9.71%, 750=0.02% 00:23:44.364 cpu : usr=1.90%, sys=6.90%, ctx=4345, majf=0, minf=5 00:23:44.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.364 issued rwts: total=2048,2296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:44.364 00:23:44.364 Run status group 0 (all jobs): 00:23:44.364 READ: bw=24.0MiB/s (25.1MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:23:44.364 WRITE: bw=29.6MiB/s (31.1MB/s), 5602KiB/s-9726KiB/s (5737kB/s-9960kB/s), io=29.6MiB (31.1MB), run=1001-1001msec 00:23:44.364 00:23:44.364 Disk stats (read/write): 00:23:44.364 nvme0n1: ios=1074/1028, merge=0/0, ticks=507/316, in_queue=823, util=86.67% 00:23:44.364 nvme0n2: ios=1804/2048, merge=0/0, ticks=437/387, in_queue=824, util=88.55% 00:23:44.364 nvme0n3: ios=1030/1084, merge=0/0, ticks=484/334, in_queue=818, 
util=89.09% 00:23:44.364 nvme0n4: ios=1662/2048, merge=0/0, ticks=388/405, in_queue=793, util=89.55% 00:23:44.364 11:11:12 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:44.364 [global] 00:23:44.364 thread=1 00:23:44.364 invalidate=1 00:23:44.364 rw=randwrite 00:23:44.364 time_based=1 00:23:44.364 runtime=1 00:23:44.364 ioengine=libaio 00:23:44.364 direct=1 00:23:44.364 bs=4096 00:23:44.364 iodepth=1 00:23:44.364 norandommap=0 00:23:44.364 numjobs=1 00:23:44.364 00:23:44.364 verify_dump=1 00:23:44.364 verify_backlog=512 00:23:44.364 verify_state_save=0 00:23:44.364 do_verify=1 00:23:44.364 verify=crc32c-intel 00:23:44.364 [job0] 00:23:44.364 filename=/dev/nvme0n1 00:23:44.364 [job1] 00:23:44.364 filename=/dev/nvme0n2 00:23:44.364 [job2] 00:23:44.364 filename=/dev/nvme0n3 00:23:44.364 [job3] 00:23:44.364 filename=/dev/nvme0n4 00:23:44.364 Could not set queue depth (nvme0n1) 00:23:44.364 Could not set queue depth (nvme0n2) 00:23:44.364 Could not set queue depth (nvme0n3) 00:23:44.364 Could not set queue depth (nvme0n4) 00:23:44.622 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:44.622 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:44.622 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:44.622 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:44.622 fio-3.35 00:23:44.622 Starting 4 threads 00:23:46.018 00:23:46.018 job0: (groupid=0, jobs=1): err= 0: pid=92352: Thu Apr 18 11:11:14 2024 00:23:46.018 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:23:46.018 slat (nsec): min=7883, max=60530, avg=15379.74, stdev=5999.82 00:23:46.018 clat (usec): min=183, max=862, avg=329.67, stdev=33.16 00:23:46.018 lat (usec): min=194, max=881, avg=345.05, stdev=34.38 00:23:46.018 clat percentiles (usec): 00:23:46.018 | 1.00th=[ 212], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:23:46.018 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:23:46.018 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 375], 00:23:46.018 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 474], 99.95th=[ 865], 00:23:46.018 | 99.99th=[ 865] 00:23:46.018 write: IOPS=1700, BW=6801KiB/s (6964kB/s)(6808KiB/1001msec); 0 zone resets 00:23:46.018 slat (nsec): min=11076, max=71218, avg=21932.56, stdev=5192.00 00:23:46.018 clat (usec): min=114, max=740, avg=250.92, stdev=45.61 00:23:46.018 lat (usec): min=135, max=773, avg=272.86, stdev=45.31 00:23:46.018 clat percentiles (usec): 00:23:46.018 | 1.00th=[ 126], 5.00th=[ 155], 10.00th=[ 215], 20.00th=[ 231], 00:23:46.018 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:23:46.018 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 330], 00:23:46.018 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 594], 99.95th=[ 742], 00:23:46.018 | 99.99th=[ 742] 00:23:46.018 bw ( KiB/s): min= 8192, max= 8192, per=24.05%, avg=8192.00, stdev= 0.00, samples=1 00:23:46.018 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:46.018 lat (usec) : 250=26.59%, 500=73.32%, 750=0.06%, 1000=0.03% 00:23:46.018 cpu : usr=1.00%, sys=5.00%, ctx=3339, majf=0, minf=13 00:23:46.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:23:46.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.018 issued rwts: total=1536,1702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:46.018 job1: (groupid=0, jobs=1): err= 0: pid=92353: Thu Apr 18 11:11:14 2024 00:23:46.018 read: IOPS=2447, BW=9790KiB/s (10.0MB/s)(9800KiB/1001msec) 00:23:46.018 slat (nsec): min=7890, max=61754, avg=15609.82, stdev=5244.49 00:23:46.018 clat (usec): min=152, max=4046, avg=216.13, stdev=120.04 00:23:46.018 lat (usec): min=167, max=4081, avg=231.74, stdev=121.17 00:23:46.018 clat percentiles (usec): 00:23:46.018 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:23:46.018 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:23:46.019 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 351], 95.00th=[ 388], 00:23:46.019 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 1123], 99.95th=[ 2704], 00:23:46.019 | 99.99th=[ 4047] 00:23:46.019 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:23:46.019 slat (usec): min=15, max=104, avg=23.04, stdev= 5.98 00:23:46.019 clat (usec): min=111, max=1528, avg=142.39, stdev=31.46 00:23:46.019 lat (usec): min=133, max=1549, avg=165.42, stdev=32.48 00:23:46.019 clat percentiles (usec): 00:23:46.019 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:23:46.019 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:23:46.019 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:23:46.019 | 99.00th=[ 186], 99.50th=[ 229], 99.90th=[ 310], 99.95th=[ 326], 00:23:46.019 | 99.99th=[ 1532] 00:23:46.019 bw ( KiB/s): min=12288, max=12288, per=36.07%, avg=12288.00, stdev= 0.00, samples=1 00:23:46.019 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:23:46.019 lat (usec) : 250=93.97%, 500=5.35%, 750=0.58%, 1000=0.02% 00:23:46.019 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02% 00:23:46.019 cpu : usr=2.60%, sys=6.60%, ctx=5096, majf=0, minf=19 00:23:46.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.019 issued rwts: total=2450,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:46.019 job2: (groupid=0, jobs=1): err= 0: pid=92355: Thu Apr 18 11:11:14 2024 00:23:46.019 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:23:46.019 slat (nsec): min=12956, max=41123, avg=14779.12, stdev=1974.19 00:23:46.019 clat (usec): min=159, max=293, avg=198.37, stdev=14.88 00:23:46.019 lat (usec): min=173, max=307, avg=213.15, stdev=15.02 00:23:46.019 clat percentiles (usec): 00:23:46.019 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:23:46.019 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:23:46.019 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 225], 00:23:46.019 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 269], 00:23:46.019 | 99.99th=[ 293] 00:23:46.019 write: IOPS=2558, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:23:46.019 slat (usec): min=17, max=262, avg=21.85, stdev= 8.01 00:23:46.019 clat (usec): min=3, max=430, avg=152.20, stdev=16.57 00:23:46.019 lat (usec): min=141, max=486, avg=174.04, stdev=17.76 00:23:46.019 clat percentiles (usec): 00:23:46.019 | 1.00th=[ 
127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:23:46.019 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:23:46.019 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:23:46.019 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 285], 99.95th=[ 429], 00:23:46.019 | 99.99th=[ 433] 00:23:46.019 bw ( KiB/s): min=12288, max=12288, per=36.07%, avg=12288.00, stdev= 0.00, samples=1 00:23:46.019 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:23:46.019 lat (usec) : 4=0.02%, 100=0.06%, 250=99.67%, 500=0.25% 00:23:46.019 cpu : usr=2.00%, sys=6.70%, ctx=5137, majf=0, minf=3 00:23:46.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.019 issued rwts: total=2560,2561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:46.019 job3: (groupid=0, jobs=1): err= 0: pid=92359: Thu Apr 18 11:11:14 2024 00:23:46.019 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:23:46.019 slat (nsec): min=9571, max=36564, avg=14391.45, stdev=2435.78 00:23:46.019 clat (usec): min=200, max=875, avg=330.41, stdev=32.63 00:23:46.019 lat (usec): min=213, max=889, avg=344.80, stdev=32.78 00:23:46.019 clat percentiles (usec): 00:23:46.019 | 1.00th=[ 227], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:23:46.019 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:23:46.019 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 379], 00:23:46.019 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 482], 99.95th=[ 873], 00:23:46.019 | 99.99th=[ 873] 00:23:46.019 write: IOPS=1700, BW=6801KiB/s (6964kB/s)(6808KiB/1001msec); 0 zone resets 00:23:46.019 slat (usec): min=11, max=103, avg=21.91, stdev= 5.62 00:23:46.019 clat (usec): min=125, max=821, avg=251.18, stdev=41.57 00:23:46.019 lat (usec): min=146, max=849, avg=273.09, stdev=41.56 00:23:46.019 clat percentiles (usec): 00:23:46.019 | 1.00th=[ 139], 5.00th=[ 186], 10.00th=[ 217], 20.00th=[ 233], 00:23:46.019 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:23:46.019 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 318], 00:23:46.019 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 668], 99.95th=[ 824], 00:23:46.019 | 99.99th=[ 824] 00:23:46.019 bw ( KiB/s): min= 8192, max= 8192, per=24.05%, avg=8192.00, stdev= 0.00, samples=1 00:23:46.019 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:46.019 lat (usec) : 250=27.27%, 500=72.64%, 750=0.03%, 1000=0.06% 00:23:46.019 cpu : usr=1.20%, sys=4.80%, ctx=3251, majf=0, minf=10 00:23:46.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.019 issued rwts: total=1536,1702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:46.019 00:23:46.019 Run status group 0 (all jobs): 00:23:46.019 READ: bw=31.5MiB/s (33.1MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.6MiB (33.1MB), run=1001-1001msec 00:23:46.019 WRITE: bw=33.3MiB/s (34.9MB/s), 6801KiB/s-9.99MiB/s (6964kB/s-10.5MB/s), io=33.3MiB (34.9MB), run=1001-1001msec 00:23:46.019 00:23:46.019 Disk stats (read/write): 00:23:46.019 nvme0n1: 
ios=1343/1536, merge=0/0, ticks=437/397, in_queue=834, util=88.18% 00:23:46.019 nvme0n2: ios=2182/2560, merge=0/0, ticks=442/395, in_queue=837, util=90.61% 00:23:46.019 nvme0n3: ios=2069/2381, merge=0/0, ticks=451/388, in_queue=839, util=89.50% 00:23:46.019 nvme0n4: ios=1294/1536, merge=0/0, ticks=425/392, in_queue=817, util=89.73% 00:23:46.019 11:11:14 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:46.019 [global] 00:23:46.019 thread=1 00:23:46.019 invalidate=1 00:23:46.019 rw=write 00:23:46.019 time_based=1 00:23:46.019 runtime=1 00:23:46.019 ioengine=libaio 00:23:46.019 direct=1 00:23:46.019 bs=4096 00:23:46.019 iodepth=128 00:23:46.019 norandommap=0 00:23:46.019 numjobs=1 00:23:46.019 00:23:46.019 verify_dump=1 00:23:46.019 verify_backlog=512 00:23:46.019 verify_state_save=0 00:23:46.019 do_verify=1 00:23:46.019 verify=crc32c-intel 00:23:46.019 [job0] 00:23:46.019 filename=/dev/nvme0n1 00:23:46.019 [job1] 00:23:46.019 filename=/dev/nvme0n2 00:23:46.019 [job2] 00:23:46.019 filename=/dev/nvme0n3 00:23:46.019 [job3] 00:23:46.019 filename=/dev/nvme0n4 00:23:46.019 Could not set queue depth (nvme0n1) 00:23:46.019 Could not set queue depth (nvme0n2) 00:23:46.019 Could not set queue depth (nvme0n3) 00:23:46.019 Could not set queue depth (nvme0n4) 00:23:46.019 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.019 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.019 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.019 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.019 fio-3.35 00:23:46.019 Starting 4 threads 00:23:47.395 00:23:47.395 job0: (groupid=0, jobs=1): err= 0: pid=92420: Thu Apr 18 11:11:15 2024 00:23:47.395 read: IOPS=2931, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1003msec) 00:23:47.395 slat (usec): min=3, max=5595, avg=165.00, stdev=609.63 00:23:47.395 clat (usec): min=960, max=25389, avg=21103.01, stdev=2596.30 00:23:47.395 lat (usec): min=4001, max=26554, avg=21268.01, stdev=2566.69 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[ 5866], 5.00th=[17695], 10.00th=[19006], 20.00th=[20055], 00:23:47.395 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21627], 60.00th=[22152], 00:23:47.395 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23200], 95.00th=[23725], 00:23:47.395 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:23:47.395 | 99.99th=[25297] 00:23:47.395 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:23:47.395 slat (usec): min=8, max=5585, avg=159.96, stdev=715.28 00:23:47.395 clat (usec): min=14996, max=26258, avg=20908.45, stdev=1498.88 00:23:47.395 lat (usec): min=15999, max=26276, avg=21068.41, stdev=1373.81 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[16319], 5.00th=[17695], 10.00th=[19006], 20.00th=[20055], 00:23:47.395 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 00:23:47.395 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22414], 95.00th=[22938], 00:23:47.395 | 99.00th=[24511], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:23:47.395 | 99.99th=[26346] 00:23:47.395 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=2 00:23:47.395 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:23:47.395 lat 
(usec) : 1000=0.02% 00:23:47.395 lat (msec) : 10=0.62%, 20=18.45%, 50=80.92% 00:23:47.395 cpu : usr=3.29%, sys=7.78%, ctx=827, majf=0, minf=11 00:23:47.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:47.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.395 issued rwts: total=2940,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.395 job1: (groupid=0, jobs=1): err= 0: pid=92421: Thu Apr 18 11:11:15 2024 00:23:47.395 read: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1003msec) 00:23:47.395 slat (usec): min=6, max=6718, avg=166.68, stdev=729.38 00:23:47.395 clat (usec): min=837, max=29060, avg=21699.62, stdev=2653.11 00:23:47.395 lat (usec): min=3558, max=29086, avg=21866.31, stdev=2559.67 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[ 6128], 5.00th=[18744], 10.00th=[20317], 20.00th=[21365], 00:23:47.395 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22152], 00:23:47.395 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23462], 95.00th=[24773], 00:23:47.395 | 99.00th=[27657], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:23:47.395 | 99.99th=[28967] 00:23:47.395 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:23:47.395 slat (usec): min=12, max=7069, avg=161.15, stdev=741.81 00:23:47.395 clat (usec): min=13380, max=23734, avg=20737.71, stdev=1478.18 00:23:47.395 lat (usec): min=15404, max=25581, avg=20898.86, stdev=1301.88 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[15664], 5.00th=[17957], 10.00th=[18482], 20.00th=[19792], 00:23:47.395 | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], 00:23:47.395 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[23200], 00:23:47.395 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:23:47.395 | 99.99th=[23725] 00:23:47.395 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=2 00:23:47.395 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:23:47.395 lat (usec) : 1000=0.02% 00:23:47.395 lat (msec) : 4=0.18%, 10=0.54%, 20=16.51%, 50=82.75% 00:23:47.395 cpu : usr=2.40%, sys=10.38%, ctx=234, majf=0, minf=5 00:23:47.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:47.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.395 issued rwts: total=2881,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.395 job2: (groupid=0, jobs=1): err= 0: pid=92422: Thu Apr 18 11:11:15 2024 00:23:47.395 read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1004msec) 00:23:47.395 slat (usec): min=8, max=5307, avg=169.36, stdev=717.37 00:23:47.395 clat (usec): min=1702, max=26280, avg=21315.72, stdev=2655.07 00:23:47.395 lat (usec): min=4607, max=26473, avg=21485.08, stdev=2577.45 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[ 7046], 5.00th=[17433], 10.00th=[19006], 20.00th=[20317], 00:23:47.395 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:23:47.395 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23987], 00:23:47.395 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:23:47.395 | 99.99th=[26346] 
00:23:47.395 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:23:47.395 slat (usec): min=14, max=5557, avg=155.93, stdev=714.86 00:23:47.395 clat (usec): min=13293, max=25684, avg=20954.63, stdev=1877.86 00:23:47.395 lat (usec): min=14273, max=25707, avg=21110.56, stdev=1758.20 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[16057], 5.00th=[17171], 10.00th=[18482], 20.00th=[19792], 00:23:47.395 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], 00:23:47.395 | 70.00th=[21627], 80.00th=[21890], 90.00th=[23462], 95.00th=[24249], 00:23:47.395 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:23:47.395 | 99.99th=[25560] 00:23:47.395 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=2 00:23:47.395 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:23:47.395 lat (msec) : 2=0.02%, 10=0.53%, 20=17.48%, 50=81.97% 00:23:47.395 cpu : usr=3.09%, sys=10.07%, ctx=267, majf=0, minf=10 00:23:47.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:47.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.395 issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.395 job3: (groupid=0, jobs=1): err= 0: pid=92423: Thu Apr 18 11:11:15 2024 00:23:47.395 read: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1004msec) 00:23:47.395 slat (usec): min=3, max=5805, avg=166.03, stdev=647.91 00:23:47.395 clat (usec): min=774, max=27552, avg=21049.83, stdev=2819.91 00:23:47.395 lat (usec): min=3588, max=27572, avg=21215.86, stdev=2785.39 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[ 5145], 5.00th=[17433], 10.00th=[18744], 20.00th=[19792], 00:23:47.395 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:23:47.395 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23462], 95.00th=[23987], 00:23:47.395 | 99.00th=[25297], 99.50th=[26346], 99.90th=[26608], 99.95th=[27395], 00:23:47.395 | 99.99th=[27657] 00:23:47.395 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:23:47.395 slat (usec): min=6, max=5541, avg=159.21, stdev=706.14 00:23:47.395 clat (usec): min=14890, max=26104, avg=20944.93, stdev=1629.80 00:23:47.395 lat (usec): min=15776, max=26269, avg=21104.14, stdev=1499.98 00:23:47.395 clat percentiles (usec): 00:23:47.395 | 1.00th=[16188], 5.00th=[17695], 10.00th=[18482], 20.00th=[20055], 00:23:47.396 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:23:47.396 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22676], 95.00th=[23462], 00:23:47.396 | 99.00th=[24511], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:23:47.396 | 99.99th=[26084] 00:23:47.396 bw ( KiB/s): min=12288, max=12312, per=25.12%, avg=12300.00, stdev=16.97, samples=2 00:23:47.396 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:23:47.396 lat (usec) : 1000=0.02% 00:23:47.396 lat (msec) : 4=0.23%, 10=0.47%, 20=18.84%, 50=80.44% 00:23:47.396 cpu : usr=3.19%, sys=8.18%, ctx=773, majf=0, minf=9 00:23:47.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:47.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.396 issued rwts: total=2946,3072,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:47.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.396 00:23:47.396 Run status group 0 (all jobs): 00:23:47.396 READ: bw=45.4MiB/s (47.7MB/s), 11.2MiB/s-11.5MiB/s (11.8MB/s-12.0MB/s), io=45.6MiB (47.8MB), run=1003-1004msec 00:23:47.396 WRITE: bw=47.8MiB/s (50.1MB/s), 12.0MiB/s-12.0MiB/s (12.5MB/s-12.5MB/s), io=48.0MiB (50.3MB), run=1003-1004msec 00:23:47.396 00:23:47.396 Disk stats (read/write): 00:23:47.396 nvme0n1: ios=2610/2665, merge=0/0, ticks=12732/11553, in_queue=24285, util=90.18% 00:23:47.396 nvme0n2: ios=2609/2581, merge=0/0, ticks=13721/11790, in_queue=25511, util=90.51% 00:23:47.396 nvme0n3: ios=2598/2656, merge=0/0, ticks=13559/11852, in_queue=25411, util=90.99% 00:23:47.396 nvme0n4: ios=2560/2680, merge=0/0, ticks=13032/11438, in_queue=24470, util=89.71% 00:23:47.396 11:11:15 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:23:47.396 [global] 00:23:47.396 thread=1 00:23:47.396 invalidate=1 00:23:47.396 rw=randwrite 00:23:47.396 time_based=1 00:23:47.396 runtime=1 00:23:47.396 ioengine=libaio 00:23:47.396 direct=1 00:23:47.396 bs=4096 00:23:47.396 iodepth=128 00:23:47.396 norandommap=0 00:23:47.396 numjobs=1 00:23:47.396 00:23:47.396 verify_dump=1 00:23:47.396 verify_backlog=512 00:23:47.396 verify_state_save=0 00:23:47.396 do_verify=1 00:23:47.396 verify=crc32c-intel 00:23:47.396 [job0] 00:23:47.396 filename=/dev/nvme0n1 00:23:47.396 [job1] 00:23:47.396 filename=/dev/nvme0n2 00:23:47.396 [job2] 00:23:47.396 filename=/dev/nvme0n3 00:23:47.396 [job3] 00:23:47.396 filename=/dev/nvme0n4 00:23:47.396 Could not set queue depth (nvme0n1) 00:23:47.396 Could not set queue depth (nvme0n2) 00:23:47.396 Could not set queue depth (nvme0n3) 00:23:47.396 Could not set queue depth (nvme0n4) 00:23:47.396 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:47.396 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:47.396 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:47.396 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:47.396 fio-3.35 00:23:47.396 Starting 4 threads 00:23:48.772 00:23:48.772 job0: (groupid=0, jobs=1): err= 0: pid=92477: Thu Apr 18 11:11:16 2024 00:23:48.772 read: IOPS=2930, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1003msec) 00:23:48.772 slat (usec): min=7, max=9992, avg=171.20, stdev=845.53 00:23:48.772 clat (usec): min=2204, max=31435, avg=21143.27, stdev=3303.72 00:23:48.772 lat (usec): min=2243, max=31496, avg=21314.47, stdev=3370.22 00:23:48.772 clat percentiles (usec): 00:23:48.772 | 1.00th=[11994], 5.00th=[15270], 10.00th=[16909], 20.00th=[20055], 00:23:48.772 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 00:23:48.772 | 70.00th=[21627], 80.00th=[22152], 90.00th=[25297], 95.00th=[26870], 00:23:48.772 | 99.00th=[28967], 99.50th=[30016], 99.90th=[30278], 99.95th=[31065], 00:23:48.772 | 99.99th=[31327] 00:23:48.772 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:23:48.772 slat (usec): min=11, max=9002, avg=151.95, stdev=650.81 00:23:48.772 clat (usec): min=12179, max=30610, avg=20950.11, stdev=2367.52 00:23:48.772 lat (usec): min=12206, max=31170, avg=21102.06, stdev=2423.28 00:23:48.772 clat percentiles (usec): 00:23:48.772 | 
1.00th=[14222], 5.00th=[16909], 10.00th=[19268], 20.00th=[19792], 00:23:48.772 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:23:48.772 | 70.00th=[21365], 80.00th=[21627], 90.00th=[22414], 95.00th=[25822], 00:23:48.772 | 99.00th=[29492], 99.50th=[30016], 99.90th=[30540], 99.95th=[30540], 00:23:48.772 | 99.99th=[30540] 00:23:48.772 bw ( KiB/s): min=12288, max=12288, per=23.82%, avg=12288.00, stdev= 0.00, samples=2 00:23:48.772 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:23:48.772 lat (msec) : 4=0.03%, 10=0.42%, 20=21.36%, 50=78.19% 00:23:48.772 cpu : usr=2.50%, sys=11.28%, ctx=385, majf=0, minf=9 00:23:48.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:48.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.772 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.772 job1: (groupid=0, jobs=1): err= 0: pid=92478: Thu Apr 18 11:11:16 2024 00:23:48.772 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:23:48.772 slat (usec): min=6, max=15919, avg=130.35, stdev=918.13 00:23:48.772 clat (usec): min=5870, max=33865, avg=18296.65, stdev=4610.71 00:23:48.772 lat (usec): min=5877, max=33881, avg=18427.00, stdev=4667.14 00:23:48.772 clat percentiles (usec): 00:23:48.772 | 1.00th=[ 7963], 5.00th=[12256], 10.00th=[13960], 20.00th=[14615], 00:23:48.772 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17433], 60.00th=[18220], 00:23:48.772 | 70.00th=[19268], 80.00th=[21365], 90.00th=[24511], 95.00th=[28181], 00:23:48.772 | 99.00th=[32113], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:23:48.772 | 99.99th=[33817] 00:23:48.772 write: IOPS=3612, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1006msec); 0 zone resets 00:23:48.772 slat (usec): min=4, max=15247, avg=119.32, stdev=683.46 00:23:48.772 clat (usec): min=3579, max=85427, avg=17043.23, stdev=6851.22 00:23:48.772 lat (usec): min=3622, max=85432, avg=17162.55, stdev=6884.36 00:23:48.772 clat percentiles (usec): 00:23:48.772 | 1.00th=[ 5735], 5.00th=[ 7832], 10.00th=[ 9896], 20.00th=[13829], 00:23:48.772 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17695], 60.00th=[17957], 00:23:48.772 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19792], 95.00th=[20841], 00:23:48.772 | 99.00th=[52691], 99.50th=[67634], 99.90th=[83362], 99.95th=[83362], 00:23:48.772 | 99.99th=[85459] 00:23:48.772 bw ( KiB/s): min=12432, max=16240, per=27.80%, avg=14336.00, stdev=2692.66, samples=2 00:23:48.772 iops : min= 3108, max= 4060, avg=3584.00, stdev=673.17, samples=2 00:23:48.772 lat (msec) : 4=0.17%, 10=6.05%, 20=76.30%, 50=16.96%, 100=0.53% 00:23:48.772 cpu : usr=3.68%, sys=10.25%, ctx=480, majf=0, minf=11 00:23:48.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:48.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.772 issued rwts: total=3584,3634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.772 job2: (groupid=0, jobs=1): err= 0: pid=92479: Thu Apr 18 11:11:16 2024 00:23:48.772 read: IOPS=3051, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:23:48.772 slat (usec): min=10, max=9742, avg=167.35, stdev=876.25 00:23:48.772 clat (usec): min=1947, max=31124, avg=20712.35, 
stdev=3196.62 00:23:48.772 lat (usec): min=6013, max=31163, avg=20879.70, stdev=3268.86 00:23:48.772 clat percentiles (usec): 00:23:48.772 | 1.00th=[11207], 5.00th=[14746], 10.00th=[16581], 20.00th=[19268], 00:23:48.772 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21365], 60.00th=[21365], 00:23:48.772 | 70.00th=[21627], 80.00th=[22152], 90.00th=[24249], 95.00th=[26084], 00:23:48.772 | 99.00th=[27657], 99.50th=[28181], 99.90th=[29754], 99.95th=[31065], 00:23:48.772 | 99.99th=[31065] 00:23:48.772 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:23:48.772 slat (usec): min=9, max=9127, avg=148.94, stdev=643.82 00:23:48.772 clat (usec): min=12035, max=30918, avg=20572.05, stdev=2497.92 00:23:48.772 lat (usec): min=12061, max=30972, avg=20720.99, stdev=2568.28 00:23:48.772 clat percentiles (usec): 00:23:48.772 | 1.00th=[12649], 5.00th=[15664], 10.00th=[18220], 20.00th=[19268], 00:23:48.772 | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:23:48.772 | 70.00th=[21365], 80.00th=[21627], 90.00th=[22152], 95.00th=[25035], 00:23:48.772 | 99.00th=[28705], 99.50th=[28705], 99.90th=[30278], 99.95th=[30540], 00:23:48.772 | 99.99th=[30802] 00:23:48.772 bw ( KiB/s): min=12288, max=12288, per=23.82%, avg=12288.00, stdev= 0.00, samples=2 00:23:48.772 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:23:48.772 lat (msec) : 2=0.02%, 10=0.46%, 20=27.41%, 50=72.12% 00:23:48.772 cpu : usr=2.99%, sys=11.07%, ctx=404, majf=0, minf=15 00:23:48.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:48.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.772 issued rwts: total=3064,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.772 job3: (groupid=0, jobs=1): err= 0: pid=92480: Thu Apr 18 11:11:16 2024 00:23:48.772 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:23:48.772 slat (usec): min=5, max=18403, avg=176.15, stdev=1139.35 00:23:48.772 clat (usec): min=7477, max=38968, avg=21816.31, stdev=5376.87 00:23:48.772 lat (usec): min=7496, max=39041, avg=21992.46, stdev=5432.76 00:23:48.772 clat percentiles (usec): 00:23:48.772 | 1.00th=[ 8586], 5.00th=[15664], 10.00th=[16581], 20.00th=[17171], 00:23:48.772 | 30.00th=[19268], 40.00th=[20055], 50.00th=[20579], 60.00th=[21103], 00:23:48.772 | 70.00th=[22938], 80.00th=[26084], 90.00th=[29230], 95.00th=[32900], 00:23:48.772 | 99.00th=[36963], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:23:48.772 | 99.99th=[39060] 00:23:48.772 write: IOPS=3222, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1011msec); 0 zone resets 00:23:48.772 slat (usec): min=5, max=16716, avg=131.37, stdev=627.23 00:23:48.773 clat (usec): min=6288, max=39025, avg=18778.19, stdev=4125.04 00:23:48.773 lat (usec): min=6313, max=39038, avg=18909.56, stdev=4172.83 00:23:48.773 clat percentiles (usec): 00:23:48.773 | 1.00th=[ 7177], 5.00th=[ 9503], 10.00th=[11338], 20.00th=[16712], 00:23:48.773 | 30.00th=[18744], 40.00th=[19530], 50.00th=[20317], 60.00th=[20841], 00:23:48.773 | 70.00th=[21365], 80.00th=[21627], 90.00th=[21890], 95.00th=[22152], 00:23:48.773 | 99.00th=[22676], 99.50th=[22676], 99.90th=[38011], 99.95th=[39060], 00:23:48.773 | 99.99th=[39060] 00:23:48.773 bw ( KiB/s): min=12400, max=12648, per=24.28%, avg=12524.00, stdev=175.36, samples=2 00:23:48.773 iops : min= 3100, max= 3162, avg=3131.00, stdev=43.84, 
samples=2 00:23:48.773 lat (msec) : 10=3.85%, 20=38.47%, 50=57.68% 00:23:48.773 cpu : usr=3.66%, sys=9.11%, ctx=449, majf=0, minf=12 00:23:48.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:48.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.773 issued rwts: total=3072,3258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.773 00:23:48.773 Run status group 0 (all jobs): 00:23:48.773 READ: bw=48.9MiB/s (51.3MB/s), 11.4MiB/s-13.9MiB/s (12.0MB/s-14.6MB/s), io=49.4MiB (51.9MB), run=1003-1011msec 00:23:48.773 WRITE: bw=50.4MiB/s (52.8MB/s), 12.0MiB/s-14.1MiB/s (12.5MB/s-14.8MB/s), io=50.9MiB (53.4MB), run=1003-1011msec 00:23:48.773 00:23:48.773 Disk stats (read/write): 00:23:48.773 nvme0n1: ios=2610/2631, merge=0/0, ticks=26501/24600, in_queue=51101, util=89.87% 00:23:48.773 nvme0n2: ios=3031/3142, merge=0/0, ticks=50984/52208, in_queue=103192, util=89.78% 00:23:48.773 nvme0n3: ios=2594/2735, merge=0/0, ticks=25940/25179, in_queue=51119, util=90.62% 00:23:48.773 nvme0n4: ios=2560/2799, merge=0/0, ticks=52942/51094, in_queue=104036, util=89.73% 00:23:48.773 11:11:16 -- target/fio.sh@55 -- # sync 00:23:48.773 11:11:17 -- target/fio.sh@59 -- # fio_pid=92495 00:23:48.773 11:11:17 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:23:48.773 11:11:17 -- target/fio.sh@61 -- # sleep 3 00:23:48.773 [global] 00:23:48.773 thread=1 00:23:48.773 invalidate=1 00:23:48.773 rw=read 00:23:48.773 time_based=1 00:23:48.773 runtime=10 00:23:48.773 ioengine=libaio 00:23:48.773 direct=1 00:23:48.773 bs=4096 00:23:48.773 iodepth=1 00:23:48.773 norandommap=1 00:23:48.773 numjobs=1 00:23:48.773 00:23:48.773 [job0] 00:23:48.773 filename=/dev/nvme0n1 00:23:48.773 [job1] 00:23:48.773 filename=/dev/nvme0n2 00:23:48.773 [job2] 00:23:48.773 filename=/dev/nvme0n3 00:23:48.773 [job3] 00:23:48.773 filename=/dev/nvme0n4 00:23:48.773 Could not set queue depth (nvme0n1) 00:23:48.773 Could not set queue depth (nvme0n2) 00:23:48.773 Could not set queue depth (nvme0n3) 00:23:48.773 Could not set queue depth (nvme0n4) 00:23:48.773 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:48.773 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:48.773 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:48.773 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:48.773 fio-3.35 00:23:48.773 Starting 4 threads 00:23:52.078 11:11:20 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:23:52.078 fio: pid=92538, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:52.078 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=44818432, buflen=4096 00:23:52.078 11:11:20 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:23:52.078 fio: pid=92537, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:52.078 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=31809536, buflen=4096 00:23:52.078 11:11:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:23:52.078 11:11:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:23:52.336 fio: pid=92535, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:52.336 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=35889152, buflen=4096 00:23:52.336 11:11:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:52.336 11:11:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:23:52.595 fio: pid=92536, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:52.595 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=59183104, buflen=4096 00:23:52.595 00:23:52.595 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92535: Thu Apr 18 11:11:21 2024 00:23:52.595 read: IOPS=2572, BW=10.0MiB/s (10.5MB/s)(34.2MiB/3406msec) 00:23:52.595 slat (usec): min=9, max=8278, avg=19.57, stdev=153.73 00:23:52.596 clat (usec): min=144, max=3963, avg=367.55, stdev=120.60 00:23:52.596 lat (usec): min=158, max=8493, avg=387.11, stdev=194.72 00:23:52.596 clat percentiles (usec): 00:23:52.596 | 1.00th=[ 182], 5.00th=[ 206], 10.00th=[ 221], 20.00th=[ 245], 00:23:52.596 | 30.00th=[ 277], 40.00th=[ 359], 50.00th=[ 392], 60.00th=[ 416], 00:23:52.596 | 70.00th=[ 437], 80.00th=[ 457], 90.00th=[ 490], 95.00th=[ 519], 00:23:52.596 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 693], 99.95th=[ 1287], 00:23:52.596 | 99.99th=[ 3949] 00:23:52.596 bw ( KiB/s): min= 8736, max=14504, per=21.73%, avg=9906.67, stdev=2261.17, samples=6 00:23:52.596 iops : min= 2184, max= 3626, avg=2476.67, stdev=565.29, samples=6 00:23:52.596 lat (usec) : 250=22.10%, 500=70.73%, 750=7.08%, 1000=0.01% 00:23:52.596 lat (msec) : 2=0.05%, 4=0.02% 00:23:52.596 cpu : usr=0.73%, sys=3.64%, ctx=8774, majf=0, minf=1 00:23:52.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 issued rwts: total=8763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:52.596 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92536: Thu Apr 18 11:11:21 2024 00:23:52.596 read: IOPS=3928, BW=15.3MiB/s (16.1MB/s)(56.4MiB/3678msec) 00:23:52.596 slat (usec): min=12, max=10811, avg=20.28, stdev=164.51 00:23:52.596 clat (usec): min=3, max=2050, avg=232.81, stdev=55.18 00:23:52.596 lat (usec): min=156, max=11230, avg=253.09, stdev=174.38 00:23:52.596 clat percentiles (usec): 00:23:52.596 | 1.00th=[ 159], 5.00th=[ 178], 10.00th=[ 192], 20.00th=[ 204], 00:23:52.596 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:23:52.596 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:23:52.596 | 99.00th=[ 326], 99.50th=[ 379], 99.90th=[ 750], 99.95th=[ 1467], 00:23:52.596 | 99.99th=[ 2008] 00:23:52.596 bw ( KiB/s): min=15336, max=16040, per=34.39%, avg=15679.29, stdev=328.46, samples=7 00:23:52.596 iops : min= 3834, max= 4010, avg=3919.71, stdev=81.99, samples=7 00:23:52.596 lat (usec) : 4=0.01%, 250=74.48%, 500=25.31%, 750=0.08%, 1000=0.03% 00:23:52.596 lat (msec) : 2=0.06%, 4=0.01% 00:23:52.596 cpu : usr=1.12%, sys=5.25%, ctx=14464, majf=0, minf=1 00:23:52.596 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 issued rwts: total=14450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:52.596 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92537: Thu Apr 18 11:11:21 2024 00:23:52.596 read: IOPS=2454, BW=9818KiB/s (10.1MB/s)(30.3MiB/3164msec) 00:23:52.596 slat (usec): min=9, max=9607, avg=19.30, stdev=130.62 00:23:52.596 clat (usec): min=165, max=7139, avg=386.38, stdev=139.61 00:23:52.596 lat (usec): min=193, max=10075, avg=405.68, stdev=190.75 00:23:52.596 clat percentiles (usec): 00:23:52.596 | 1.00th=[ 196], 5.00th=[ 217], 10.00th=[ 235], 20.00th=[ 269], 00:23:52.596 | 30.00th=[ 351], 40.00th=[ 383], 50.00th=[ 404], 60.00th=[ 424], 00:23:52.596 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 490], 95.00th=[ 519], 00:23:52.596 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 1401], 99.95th=[ 2114], 00:23:52.596 | 99.99th=[ 7111] 00:23:52.596 bw ( KiB/s): min= 8744, max=14424, per=21.62%, avg=9858.67, stdev=2244.07, samples=6 00:23:52.596 iops : min= 2186, max= 3606, avg=2464.67, stdev=561.02, samples=6 00:23:52.596 lat (usec) : 250=14.75%, 500=77.83%, 750=7.24%, 1000=0.04% 00:23:52.596 lat (msec) : 2=0.08%, 4=0.03%, 10=0.03% 00:23:52.596 cpu : usr=1.04%, sys=3.35%, ctx=7779, majf=0, minf=1 00:23:52.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 issued rwts: total=7767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:52.596 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92538: Thu Apr 18 11:11:21 2024 00:23:52.596 read: IOPS=3742, BW=14.6MiB/s (15.3MB/s)(42.7MiB/2924msec) 00:23:52.596 slat (usec): min=12, max=625, avg=17.68, stdev= 8.22 00:23:52.596 clat (usec): min=3, max=1217, avg=247.59, stdev=36.82 00:23:52.596 lat (usec): min=179, max=1233, avg=265.27, stdev=38.05 00:23:52.596 clat percentiles (usec): 00:23:52.596 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 225], 00:23:52.596 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:23:52.596 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:23:52.596 | 99.00th=[ 347], 99.50th=[ 383], 99.90th=[ 603], 99.95th=[ 742], 00:23:52.596 | 99.99th=[ 1172] 00:23:52.596 bw ( KiB/s): min=14552, max=15256, per=32.88%, avg=14988.80, stdev=280.89, samples=5 00:23:52.596 iops : min= 3638, max= 3814, avg=3747.20, stdev=70.22, samples=5 00:23:52.596 lat (usec) : 4=0.01%, 250=61.85%, 500=37.90%, 750=0.20% 00:23:52.596 lat (msec) : 2=0.04% 00:23:52.596 cpu : usr=0.99%, sys=5.61%, ctx=10944, majf=0, minf=1 00:23:52.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.596 issued rwts: total=10943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:52.596 00:23:52.596 Run status group 0 
(all jobs): 00:23:52.596 READ: bw=44.5MiB/s (46.7MB/s), 9818KiB/s-15.3MiB/s (10.1MB/s-16.1MB/s), io=164MiB (172MB), run=2924-3678msec 00:23:52.596 00:23:52.596 Disk stats (read/write): 00:23:52.596 nvme0n1: ios=8595/0, merge=0/0, ticks=3199/0, in_queue=3199, util=95.62% 00:23:52.596 nvme0n2: ios=14163/0, merge=0/0, ticks=3386/0, in_queue=3386, util=95.66% 00:23:52.596 nvme0n3: ios=7646/0, merge=0/0, ticks=2971/0, in_queue=2971, util=96.15% 00:23:52.596 nvme0n4: ios=10747/0, merge=0/0, ticks=2733/0, in_queue=2733, util=96.79% 00:23:52.596 11:11:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:52.596 11:11:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:23:52.855 11:11:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:52.855 11:11:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:23:53.113 11:11:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:53.113 11:11:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:23:53.375 11:11:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:53.375 11:11:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:23:53.633 11:11:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:53.633 11:11:22 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:23:53.892 11:11:22 -- target/fio.sh@69 -- # fio_status=0 00:23:53.892 11:11:22 -- target/fio.sh@70 -- # wait 92495 00:23:53.892 11:11:22 -- target/fio.sh@70 -- # fio_status=4 00:23:53.892 11:11:22 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:53.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:53.892 11:11:22 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:53.892 11:11:22 -- common/autotest_common.sh@1205 -- # local i=0 00:23:54.151 11:11:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:23:54.151 11:11:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.151 11:11:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:54.151 11:11:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.151 11:11:22 -- common/autotest_common.sh@1217 -- # return 0 00:23:54.151 11:11:22 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:23:54.151 nvmf hotplug test: fio failed as expected 00:23:54.151 11:11:22 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:23:54.151 11:11:22 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.410 11:11:22 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:23:54.410 11:11:22 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:23:54.410 11:11:22 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:23:54.410 11:11:22 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:23:54.410 11:11:22 -- target/fio.sh@91 -- # nvmftestfini 00:23:54.410 11:11:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:54.410 11:11:22 -- nvmf/common.sh@117 -- # sync 00:23:54.410 11:11:22 -- nvmf/common.sh@119 -- # '[' tcp 
== tcp ']' 00:23:54.410 11:11:22 -- nvmf/common.sh@120 -- # set +e 00:23:54.410 11:11:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.410 11:11:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.410 rmmod nvme_tcp 00:23:54.410 rmmod nvme_fabrics 00:23:54.410 rmmod nvme_keyring 00:23:54.410 11:11:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.410 11:11:22 -- nvmf/common.sh@124 -- # set -e 00:23:54.410 11:11:22 -- nvmf/common.sh@125 -- # return 0 00:23:54.410 11:11:22 -- nvmf/common.sh@478 -- # '[' -n 92014 ']' 00:23:54.410 11:11:22 -- nvmf/common.sh@479 -- # killprocess 92014 00:23:54.410 11:11:22 -- common/autotest_common.sh@936 -- # '[' -z 92014 ']' 00:23:54.410 11:11:22 -- common/autotest_common.sh@940 -- # kill -0 92014 00:23:54.410 11:11:22 -- common/autotest_common.sh@941 -- # uname 00:23:54.410 11:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:54.410 11:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92014 00:23:54.410 killing process with pid 92014 00:23:54.410 11:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:54.410 11:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:54.410 11:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92014' 00:23:54.410 11:11:22 -- common/autotest_common.sh@955 -- # kill 92014 00:23:54.410 11:11:22 -- common/autotest_common.sh@960 -- # wait 92014 00:23:54.672 11:11:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:54.672 11:11:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:54.672 11:11:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:54.672 11:11:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.672 11:11:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.672 11:11:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.672 11:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.672 11:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.672 11:11:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:54.672 00:23:54.672 real 0m19.212s 00:23:54.672 user 1m13.966s 00:23:54.672 sys 0m8.322s 00:23:54.672 11:11:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:54.672 ************************************ 00:23:54.672 END TEST nvmf_fio_target 00:23:54.672 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:23:54.672 ************************************ 00:23:54.672 11:11:23 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:54.672 11:11:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:54.672 11:11:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:54.672 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:23:54.672 ************************************ 00:23:54.672 START TEST nvmf_bdevio 00:23:54.673 ************************************ 00:23:54.673 11:11:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:54.932 * Looking for test storage... 
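The nvmf_fio_target run that finishes above is, at its core, a hot-remove test: fio keeps reading from the exported namespaces while the backing bdevs are deleted out from under it, so the Remote I/O errors (err=121) are the expected outcome. A minimal stand-alone sketch of that sequence, assuming the same repository layout (/home/vagrant/spdk_repo/spdk), the default /var/tmp/spdk.sock RPC socket, and a target that already exposes nqn.2016-06.io.spdk:cnode1 with the Malloc/RAID bdevs attached (this is an illustrative reconstruction, not the fio.sh script itself):

    SPDK=/home/vagrant/spdk_repo/spdk

    # 10-second read workload against the connected nvme0n1..nvme0n4 devices
    "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!

    # hot-remove the backing bdevs while I/O is in flight
    "$SPDK/scripts/rpc.py" bdev_raid_delete concat0
    "$SPDK/scripts/rpc.py" bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK/scripts/rpc.py" bdev_malloc_delete "$m"
    done

    # fio is expected to exit non-zero once its targets disappear
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'

    # tear down the initiator connection and the subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1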
00:23:54.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:54.932 11:11:23 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:54.932 11:11:23 -- nvmf/common.sh@7 -- # uname -s 00:23:54.932 11:11:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.932 11:11:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.932 11:11:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.932 11:11:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.932 11:11:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.932 11:11:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.932 11:11:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.932 11:11:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.932 11:11:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.932 11:11:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.932 11:11:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:54.932 11:11:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:54.932 11:11:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.932 11:11:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.932 11:11:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:54.932 11:11:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.932 11:11:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:54.932 11:11:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.932 11:11:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.932 11:11:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.932 11:11:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.932 11:11:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.932 11:11:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.932 11:11:23 -- paths/export.sh@5 -- # export PATH 00:23:54.932 11:11:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.932 11:11:23 -- nvmf/common.sh@47 -- # : 0 00:23:54.932 11:11:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.932 11:11:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.932 11:11:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.932 11:11:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.932 11:11:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.932 11:11:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.932 11:11:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.932 11:11:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.932 11:11:23 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.932 11:11:23 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.932 11:11:23 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:54.932 11:11:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:54.932 11:11:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.932 11:11:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:54.932 11:11:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:54.932 11:11:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:54.932 11:11:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.932 11:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.932 11:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.932 11:11:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:54.932 11:11:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:54.932 11:11:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:54.932 11:11:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:54.932 11:11:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:54.932 11:11:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:54.932 11:11:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.932 11:11:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.932 11:11:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:54.932 11:11:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:54.932 11:11:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:54.932 11:11:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:54.932 11:11:23 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:54.932 11:11:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.932 11:11:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:54.932 11:11:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:54.932 11:11:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:54.932 11:11:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:54.932 11:11:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:54.932 11:11:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:54.932 Cannot find device "nvmf_tgt_br" 00:23:54.932 11:11:23 -- nvmf/common.sh@155 -- # true 00:23:54.932 11:11:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:54.932 Cannot find device "nvmf_tgt_br2" 00:23:54.932 11:11:23 -- nvmf/common.sh@156 -- # true 00:23:54.932 11:11:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:54.932 11:11:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:54.932 Cannot find device "nvmf_tgt_br" 00:23:54.932 11:11:23 -- nvmf/common.sh@158 -- # true 00:23:54.932 11:11:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:54.932 Cannot find device "nvmf_tgt_br2" 00:23:54.932 11:11:23 -- nvmf/common.sh@159 -- # true 00:23:54.932 11:11:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:54.932 11:11:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:54.932 11:11:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:54.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:54.932 11:11:23 -- nvmf/common.sh@162 -- # true 00:23:54.932 11:11:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:54.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:54.932 11:11:23 -- nvmf/common.sh@163 -- # true 00:23:54.932 11:11:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:54.932 11:11:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:54.932 11:11:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:54.932 11:11:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:55.191 11:11:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:55.191 11:11:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:55.191 11:11:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:55.191 11:11:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:55.191 11:11:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:55.191 11:11:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:55.191 11:11:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:55.191 11:11:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:55.191 11:11:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:55.191 11:11:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:55.191 11:11:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:55.191 11:11:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:23:55.191 11:11:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:55.191 11:11:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:55.191 11:11:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:55.191 11:11:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:55.191 11:11:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:55.191 11:11:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:55.191 11:11:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:55.191 11:11:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:55.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:23:55.191 00:23:55.191 --- 10.0.0.2 ping statistics --- 00:23:55.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.191 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:55.191 11:11:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:55.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:55.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:23:55.192 00:23:55.192 --- 10.0.0.3 ping statistics --- 00:23:55.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.192 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:55.192 11:11:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:55.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:23:55.192 00:23:55.192 --- 10.0.0.1 ping statistics --- 00:23:55.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.192 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:23:55.192 11:11:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.192 11:11:23 -- nvmf/common.sh@422 -- # return 0 00:23:55.192 11:11:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:55.192 11:11:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.192 11:11:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:55.192 11:11:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:55.192 11:11:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.192 11:11:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:55.192 11:11:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:55.192 11:11:23 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:55.192 11:11:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:55.192 11:11:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:55.192 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 11:11:23 -- nvmf/common.sh@470 -- # nvmfpid=92870 00:23:55.192 11:11:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:23:55.192 11:11:23 -- nvmf/common.sh@471 -- # waitforlisten 92870 00:23:55.192 11:11:23 -- common/autotest_common.sh@817 -- # '[' -z 92870 ']' 00:23:55.192 11:11:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.192 11:11:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:55.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
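Before nvmf_tgt is started above, nvmf_veth_init has just laid out the virtual test network whose connectivity the three pings confirm. Condensed into plain iproute2/iptables commands (same interface names, namespace, and addresses as in this run; the initial cleanup of any pre-existing devices is omitted), the topology looks roughly like this sketch:

    # host side: nvmf_init_if (10.0.0.1); target side, inside the
    # nvmf_tgt_ns_spdk namespace: nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2
    # (10.0.0.3); all joined through the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # open the NVMe/TCP port toward the initiator and allow bridged traffic
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # the target application then runs inside the namespace, as traced above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &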
00:23:55.192 11:11:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.192 11:11:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:55.192 11:11:23 -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 [2024-04-18 11:11:23.803779] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:55.192 [2024-04-18 11:11:23.803865] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.450 [2024-04-18 11:11:23.937682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.450 [2024-04-18 11:11:24.035231] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.450 [2024-04-18 11:11:24.035352] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.450 [2024-04-18 11:11:24.035373] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.450 [2024-04-18 11:11:24.035387] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.450 [2024-04-18 11:11:24.035410] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.450 [2024-04-18 11:11:24.035604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:55.450 [2024-04-18 11:11:24.035771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:55.450 [2024-04-18 11:11:24.036345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:55.450 [2024-04-18 11:11:24.036351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.383 11:11:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:56.383 11:11:24 -- common/autotest_common.sh@850 -- # return 0 00:23:56.383 11:11:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:56.383 11:11:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:56.383 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.383 11:11:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.383 11:11:24 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.383 11:11:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.383 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.383 [2024-04-18 11:11:24.811762] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.384 11:11:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.384 11:11:24 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:56.384 11:11:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.384 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.384 Malloc0 00:23:56.384 11:11:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.384 11:11:24 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:56.384 11:11:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.384 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.384 11:11:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.384 11:11:24 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:56.384 11:11:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.384 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.384 11:11:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.384 11:11:24 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.384 11:11:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.384 11:11:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.384 [2024-04-18 11:11:24.891883] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.384 11:11:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.384 11:11:24 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:23:56.384 11:11:24 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:56.384 11:11:24 -- nvmf/common.sh@521 -- # config=() 00:23:56.384 11:11:24 -- nvmf/common.sh@521 -- # local subsystem config 00:23:56.384 11:11:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:56.384 11:11:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:56.384 { 00:23:56.384 "params": { 00:23:56.384 "name": "Nvme$subsystem", 00:23:56.384 "trtype": "$TEST_TRANSPORT", 00:23:56.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.384 "adrfam": "ipv4", 00:23:56.384 "trsvcid": "$NVMF_PORT", 00:23:56.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.384 "hdgst": ${hdgst:-false}, 00:23:56.384 "ddgst": ${ddgst:-false} 00:23:56.384 }, 00:23:56.384 "method": "bdev_nvme_attach_controller" 00:23:56.384 } 00:23:56.384 EOF 00:23:56.384 )") 00:23:56.384 11:11:24 -- nvmf/common.sh@543 -- # cat 00:23:56.384 11:11:24 -- nvmf/common.sh@545 -- # jq . 00:23:56.384 11:11:24 -- nvmf/common.sh@546 -- # IFS=, 00:23:56.384 11:11:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:56.384 "params": { 00:23:56.384 "name": "Nvme1", 00:23:56.384 "trtype": "tcp", 00:23:56.384 "traddr": "10.0.0.2", 00:23:56.384 "adrfam": "ipv4", 00:23:56.384 "trsvcid": "4420", 00:23:56.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.384 "hdgst": false, 00:23:56.384 "ddgst": false 00:23:56.384 }, 00:23:56.384 "method": "bdev_nvme_attach_controller" 00:23:56.384 }' 00:23:56.384 [2024-04-18 11:11:24.945658] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
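For reference, the target-side configuration that bdevio.sh applies above through rpc_cmd can be replayed with scripts/rpc.py directly (rpc_cmd is assumed here to resolve to rpc.py against the default /var/tmp/spdk.sock socket), and a config equivalent to what gen_nvmf_target_json feeds bdevio on /dev/fd/62 can be written by wrapping the bdev_nvme_attach_controller entry printed above in the standard subsystems/bdev envelope; the /tmp/bdevio_nvme.json scratch file is just for illustration:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target: TCP transport, one 64 MiB malloc namespace, listener on 10.0.0.2:4420
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevio attaches an NVMe-oF controller using the same
    # parameters printed in the generated JSON above
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json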
00:23:56.384 [2024-04-18 11:11:24.946191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92924 ] 00:23:56.644 [2024-04-18 11:11:25.088247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:56.644 [2024-04-18 11:11:25.187310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.644 [2024-04-18 11:11:25.187379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.644 [2024-04-18 11:11:25.187384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.902 I/O targets: 00:23:56.902 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:56.902 00:23:56.902 00:23:56.902 CUnit - A unit testing framework for C - Version 2.1-3 00:23:56.902 http://cunit.sourceforge.net/ 00:23:56.902 00:23:56.902 00:23:56.902 Suite: bdevio tests on: Nvme1n1 00:23:56.902 Test: blockdev write read block ...passed 00:23:56.902 Test: blockdev write zeroes read block ...passed 00:23:56.902 Test: blockdev write zeroes read no split ...passed 00:23:56.902 Test: blockdev write zeroes read split ...passed 00:23:56.902 Test: blockdev write zeroes read split partial ...passed 00:23:56.902 Test: blockdev reset ...[2024-04-18 11:11:25.483028] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:56.902 [2024-04-18 11:11:25.483154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x928990 (9): Bad file descriptor 00:23:56.902 [2024-04-18 11:11:25.494294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:56.902 passed 00:23:56.902 Test: blockdev write read 8 blocks ...passed 00:23:56.902 Test: blockdev write read size > 128k ...passed 00:23:56.902 Test: blockdev write read invalid size ...passed 00:23:56.902 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:56.902 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:56.902 Test: blockdev write read max offset ...passed 00:23:57.160 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:57.160 Test: blockdev writev readv 8 blocks ...passed 00:23:57.160 Test: blockdev writev readv 30 x 1block ...passed 00:23:57.160 Test: blockdev writev readv block ...passed 00:23:57.160 Test: blockdev writev readv size > 128k ...passed 00:23:57.160 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:57.160 Test: blockdev comparev and writev ...[2024-04-18 11:11:25.664728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.160 [2024-04-18 11:11:25.664779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.160 [2024-04-18 11:11:25.664800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.160 [2024-04-18 11:11:25.664812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.160 [2024-04-18 11:11:25.665231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.160 [2024-04-18 11:11:25.665258] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.160 [2024-04-18 11:11:25.665276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.161 [2024-04-18 11:11:25.665286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.161 [2024-04-18 11:11:25.665579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.161 [2024-04-18 11:11:25.665605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.161 [2024-04-18 11:11:25.665622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.161 [2024-04-18 11:11:25.665632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.161 [2024-04-18 11:11:25.665984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.161 [2024-04-18 11:11:25.666009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.161 [2024-04-18 11:11:25.666026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.161 [2024-04-18 11:11:25.666048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.161 passed 00:23:57.161 Test: blockdev nvme passthru rw ...passed 00:23:57.161 Test: blockdev nvme passthru vendor specific ...[2024-04-18 11:11:25.748388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.161 [2024-04-18 11:11:25.748422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.161 [2024-04-18 11:11:25.748551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.161 [2024-04-18 11:11:25.748568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.161 [2024-04-18 11:11:25.748684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.161 [2024-04-18 11:11:25.748709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.161 [2024-04-18 11:11:25.748827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.161 [2024-04-18 11:11:25.748852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.161 passed 00:23:57.161 Test: blockdev nvme admin passthru ...passed 00:23:57.479 Test: blockdev copy ...passed 00:23:57.479 00:23:57.479 Run Summary: Type Total Ran Passed Failed Inactive 00:23:57.479 suites 1 1 n/a 0 0 00:23:57.479 tests 23 23 23 0 0 00:23:57.479 asserts 
152 152 152 0 n/a 00:23:57.479 00:23:57.479 Elapsed time = 0.876 seconds 00:23:57.479 11:11:25 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.479 11:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.479 11:11:25 -- common/autotest_common.sh@10 -- # set +x 00:23:57.479 11:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.479 11:11:26 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:57.479 11:11:26 -- target/bdevio.sh@30 -- # nvmftestfini 00:23:57.479 11:11:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:57.479 11:11:26 -- nvmf/common.sh@117 -- # sync 00:23:57.479 11:11:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.479 11:11:26 -- nvmf/common.sh@120 -- # set +e 00:23:57.479 11:11:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.479 11:11:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.479 rmmod nvme_tcp 00:23:57.479 rmmod nvme_fabrics 00:23:57.479 rmmod nvme_keyring 00:23:57.479 11:11:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.479 11:11:26 -- nvmf/common.sh@124 -- # set -e 00:23:57.479 11:11:26 -- nvmf/common.sh@125 -- # return 0 00:23:57.479 11:11:26 -- nvmf/common.sh@478 -- # '[' -n 92870 ']' 00:23:57.479 11:11:26 -- nvmf/common.sh@479 -- # killprocess 92870 00:23:57.479 11:11:26 -- common/autotest_common.sh@936 -- # '[' -z 92870 ']' 00:23:57.479 11:11:26 -- common/autotest_common.sh@940 -- # kill -0 92870 00:23:57.479 11:11:26 -- common/autotest_common.sh@941 -- # uname 00:23:57.479 11:11:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:57.479 11:11:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92870 00:23:57.737 11:11:26 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:23:57.737 11:11:26 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:23:57.737 killing process with pid 92870 00:23:57.737 11:11:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92870' 00:23:57.737 11:11:26 -- common/autotest_common.sh@955 -- # kill 92870 00:23:57.737 11:11:26 -- common/autotest_common.sh@960 -- # wait 92870 00:23:57.737 11:11:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:57.737 11:11:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:57.737 11:11:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:57.737 11:11:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.737 11:11:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.737 11:11:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.737 11:11:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.737 11:11:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.737 11:11:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:57.998 00:23:57.998 real 0m3.097s 00:23:57.998 user 0m11.162s 00:23:57.998 sys 0m0.796s 00:23:57.998 11:11:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:57.998 ************************************ 00:23:57.998 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:23:57.998 END TEST nvmf_bdevio 00:23:57.998 ************************************ 00:23:57.998 11:11:26 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:23:57.998 11:11:26 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:57.998 11:11:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:57.998 
11:11:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:57.998 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:23:57.998 ************************************ 00:23:57.998 START TEST nvmf_bdevio_no_huge 00:23:57.998 ************************************ 00:23:57.998 11:11:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:57.998 * Looking for test storage... 00:23:57.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:57.998 11:11:26 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:57.998 11:11:26 -- nvmf/common.sh@7 -- # uname -s 00:23:57.998 11:11:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.998 11:11:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.998 11:11:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.998 11:11:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.998 11:11:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.998 11:11:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.998 11:11:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.998 11:11:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.998 11:11:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.998 11:11:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.998 11:11:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:57.998 11:11:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:23:57.998 11:11:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.999 11:11:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.999 11:11:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:57.999 11:11:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.999 11:11:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.999 11:11:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.999 11:11:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.999 11:11:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.999 11:11:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.999 11:11:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.999 11:11:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.999 11:11:26 -- paths/export.sh@5 -- # export PATH 00:23:57.999 11:11:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.999 11:11:26 -- nvmf/common.sh@47 -- # : 0 00:23:57.999 11:11:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.999 11:11:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.999 11:11:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.999 11:11:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.999 11:11:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.999 11:11:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.999 11:11:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.999 11:11:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.999 11:11:26 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:57.999 11:11:26 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:57.999 11:11:26 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:57.999 11:11:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:57.999 11:11:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.999 11:11:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:57.999 11:11:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:57.999 11:11:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:57.999 11:11:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.999 11:11:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.999 11:11:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.999 11:11:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:57.999 11:11:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:57.999 11:11:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:57.999 11:11:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:57.999 11:11:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:57.999 11:11:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:57.999 11:11:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.999 11:11:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.999 11:11:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:57.999 11:11:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:57.999 11:11:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:57.999 11:11:26 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:57.999 11:11:26 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:57.999 11:11:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.999 11:11:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:57.999 11:11:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:57.999 11:11:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:57.999 11:11:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:57.999 11:11:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:57.999 11:11:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:58.259 Cannot find device "nvmf_tgt_br" 00:23:58.259 11:11:26 -- nvmf/common.sh@155 -- # true 00:23:58.259 11:11:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:58.259 Cannot find device "nvmf_tgt_br2" 00:23:58.259 11:11:26 -- nvmf/common.sh@156 -- # true 00:23:58.259 11:11:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:58.259 11:11:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:58.259 Cannot find device "nvmf_tgt_br" 00:23:58.259 11:11:26 -- nvmf/common.sh@158 -- # true 00:23:58.259 11:11:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:58.259 Cannot find device "nvmf_tgt_br2" 00:23:58.259 11:11:26 -- nvmf/common.sh@159 -- # true 00:23:58.259 11:11:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:58.259 11:11:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:58.259 11:11:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:58.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.259 11:11:26 -- nvmf/common.sh@162 -- # true 00:23:58.259 11:11:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:58.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.259 11:11:26 -- nvmf/common.sh@163 -- # true 00:23:58.259 11:11:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:58.259 11:11:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:58.259 11:11:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:58.259 11:11:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:58.259 11:11:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:58.259 11:11:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:58.259 11:11:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:58.259 11:11:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:58.259 11:11:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:58.259 11:11:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:58.259 11:11:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:58.259 11:11:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:58.259 11:11:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:58.259 11:11:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:58.259 11:11:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:58.259 11:11:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:23:58.259 11:11:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:58.259 11:11:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:58.259 11:11:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:58.518 11:11:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:58.518 11:11:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:58.518 11:11:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:58.518 11:11:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:58.518 11:11:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:58.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:58.518 00:23:58.518 --- 10.0.0.2 ping statistics --- 00:23:58.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.518 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:58.518 11:11:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:58.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:58.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:23:58.518 00:23:58.518 --- 10.0.0.3 ping statistics --- 00:23:58.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.518 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:58.518 11:11:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:58.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:58.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:58.518 00:23:58.518 --- 10.0.0.1 ping statistics --- 00:23:58.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.518 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:58.518 11:11:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.518 11:11:26 -- nvmf/common.sh@422 -- # return 0 00:23:58.518 11:11:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:58.518 11:11:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.518 11:11:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:58.518 11:11:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:58.518 11:11:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.518 11:11:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:58.518 11:11:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:58.518 11:11:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:58.518 11:11:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:58.518 11:11:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:58.518 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:23:58.518 11:11:26 -- nvmf/common.sh@470 -- # nvmfpid=93107 00:23:58.518 11:11:26 -- nvmf/common.sh@471 -- # waitforlisten 93107 00:23:58.518 11:11:26 -- common/autotest_common.sh@817 -- # '[' -z 93107 ']' 00:23:58.518 11:11:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:58.518 11:11:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.518 11:11:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:58.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
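The trace above is nvmf_veth_init building the virtual topology every NVMe/TCP test in this job runs on: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) left in the root namespace, all host-side peers enslaved to the nvmf_br bridge, an iptables rule admitting port 4420, and a ping sweep to confirm reachability. A minimal standalone sketch of the same topology, with interface names and addresses copied from the trace and all cleanup and error handling omitted:

# namespace and the three veth pairs (names and addresses as in the trace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# target-side interfaces move into the namespace, the initiator side stays in the root ns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP traffic on port 4420 and confirm reachability, as the trace does
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside that namespace; for this variant it runs without hugepages (nvmf_tgt --no-huge -s 1024 -m 0x78, as shown at the end of the trace above), which is the whole point of the nvmf_bdevio_no_huge test.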
00:23:58.518 11:11:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.518 11:11:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:58.518 11:11:26 -- common/autotest_common.sh@10 -- # set +x 00:23:58.518 [2024-04-18 11:11:27.033825] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:58.518 [2024-04-18 11:11:27.033942] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:58.777 [2024-04-18 11:11:27.171216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.777 [2024-04-18 11:11:27.295671] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.777 [2024-04-18 11:11:27.295748] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.777 [2024-04-18 11:11:27.295762] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.777 [2024-04-18 11:11:27.295773] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.777 [2024-04-18 11:11:27.295783] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.777 [2024-04-18 11:11:27.296216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:58.777 [2024-04-18 11:11:27.296517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:58.777 [2024-04-18 11:11:27.296648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:58.777 [2024-04-18 11:11:27.296654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.712 11:11:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:59.712 11:11:28 -- common/autotest_common.sh@850 -- # return 0 00:23:59.712 11:11:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:59.712 11:11:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:59.712 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:23:59.712 11:11:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.712 11:11:28 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.712 11:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.712 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:23:59.712 [2024-04-18 11:11:28.127415] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.712 11:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.712 11:11:28 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:59.712 11:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.712 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:23:59.712 Malloc0 00:23:59.712 11:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.712 11:11:28 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.712 11:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.712 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:23:59.712 11:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.712 11:11:28 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.712 11:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.712 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:23:59.712 11:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.712 11:11:28 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.712 11:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.712 11:11:28 -- common/autotest_common.sh@10 -- # set +x 00:23:59.712 [2024-04-18 11:11:28.175740] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.712 11:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.712 11:11:28 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:59.712 11:11:28 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:59.712 11:11:28 -- nvmf/common.sh@521 -- # config=() 00:23:59.712 11:11:28 -- nvmf/common.sh@521 -- # local subsystem config 00:23:59.712 11:11:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:59.712 11:11:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:59.712 { 00:23:59.712 "params": { 00:23:59.712 "name": "Nvme$subsystem", 00:23:59.712 "trtype": "$TEST_TRANSPORT", 00:23:59.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.712 "adrfam": "ipv4", 00:23:59.712 "trsvcid": "$NVMF_PORT", 00:23:59.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.713 "hdgst": ${hdgst:-false}, 00:23:59.713 "ddgst": ${ddgst:-false} 00:23:59.713 }, 00:23:59.713 "method": "bdev_nvme_attach_controller" 00:23:59.713 } 00:23:59.713 EOF 00:23:59.713 )") 00:23:59.713 11:11:28 -- nvmf/common.sh@543 -- # cat 00:23:59.713 11:11:28 -- nvmf/common.sh@545 -- # jq . 00:23:59.713 11:11:28 -- nvmf/common.sh@546 -- # IFS=, 00:23:59.713 11:11:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:59.713 "params": { 00:23:59.713 "name": "Nvme1", 00:23:59.713 "trtype": "tcp", 00:23:59.713 "traddr": "10.0.0.2", 00:23:59.713 "adrfam": "ipv4", 00:23:59.713 "trsvcid": "4420", 00:23:59.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.713 "hdgst": false, 00:23:59.713 "ddgst": false 00:23:59.713 }, 00:23:59.713 "method": "bdev_nvme_attach_controller" 00:23:59.713 }' 00:23:59.713 [2024-04-18 11:11:28.229548] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
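Once the no-huge target is up, it is provisioned over JSON-RPC in the same way as the earlier bdevio run: a TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace, and a listener on 10.0.0.2:4420. The bdevio app then gets its initiator-side bdev from a generated JSON config (passed as /dev/fd/62 in the trace) whose attach entry is printed above. A condensed sketch follows; only the attach entry is taken verbatim from the trace, while the outer subsystems/bdev/config wrapper is the standard SPDK app JSON-config layout and is reconstructed here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator-side config for bdevio, equivalent to the generated /dev/fd/62 document
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json --no-huge -s 1024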
00:23:59.713 [2024-04-18 11:11:28.229665] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid93161 ] 00:23:59.971 [2024-04-18 11:11:28.368178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.971 [2024-04-18 11:11:28.501197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.971 [2024-04-18 11:11:28.501280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.971 [2024-04-18 11:11:28.501286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.229 I/O targets: 00:24:00.229 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:00.229 00:24:00.229 00:24:00.229 CUnit - A unit testing framework for C - Version 2.1-3 00:24:00.229 http://cunit.sourceforge.net/ 00:24:00.229 00:24:00.229 00:24:00.229 Suite: bdevio tests on: Nvme1n1 00:24:00.229 Test: blockdev write read block ...passed 00:24:00.229 Test: blockdev write zeroes read block ...passed 00:24:00.229 Test: blockdev write zeroes read no split ...passed 00:24:00.229 Test: blockdev write zeroes read split ...passed 00:24:00.229 Test: blockdev write zeroes read split partial ...passed 00:24:00.229 Test: blockdev reset ...[2024-04-18 11:11:28.833130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.229 [2024-04-18 11:11:28.833274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc7180 (9): Bad file descriptor 00:24:00.229 [2024-04-18 11:11:28.851860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:00.229 passed 00:24:00.229 Test: blockdev write read 8 blocks ...passed 00:24:00.229 Test: blockdev write read size > 128k ...passed 00:24:00.229 Test: blockdev write read invalid size ...passed 00:24:00.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:00.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:00.488 Test: blockdev write read max offset ...passed 00:24:00.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:00.488 Test: blockdev writev readv 8 blocks ...passed 00:24:00.488 Test: blockdev writev readv 30 x 1block ...passed 00:24:00.488 Test: blockdev writev readv block ...passed 00:24:00.488 Test: blockdev writev readv size > 128k ...passed 00:24:00.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:00.488 Test: blockdev comparev and writev ...[2024-04-18 11:11:29.027742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.027816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.027836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.027847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.028316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.028341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.028359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.028369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.028745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.028775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.028792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.028802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.029182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.029211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.029229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:00.488 [2024-04-18 11:11:29.029239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.488 passed 00:24:00.488 Test: blockdev nvme passthru rw ...passed 00:24:00.488 Test: blockdev nvme passthru vendor specific ...[2024-04-18 11:11:29.113376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:00.488 [2024-04-18 11:11:29.113423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.488 [2024-04-18 11:11:29.113556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:00.489 [2024-04-18 11:11:29.113572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.489 [2024-04-18 11:11:29.113687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:00.489 [2024-04-18 11:11:29.113702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.489 [2024-04-18 11:11:29.113817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:00.489 [2024-04-18 11:11:29.113834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.489 passed 00:24:00.746 Test: blockdev nvme admin passthru ...passed 00:24:00.746 Test: blockdev copy ...passed 00:24:00.746 00:24:00.746 Run Summary: Type Total Ran Passed Failed Inactive 00:24:00.746 suites 1 1 n/a 0 0 00:24:00.746 tests 23 23 23 0 0 00:24:00.746 asserts 152 152 152 0 
n/a 00:24:00.746 00:24:00.746 Elapsed time = 0.952 seconds 00:24:01.004 11:11:29 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.004 11:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.004 11:11:29 -- common/autotest_common.sh@10 -- # set +x 00:24:01.004 11:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.004 11:11:29 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:01.004 11:11:29 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:01.004 11:11:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:01.004 11:11:29 -- nvmf/common.sh@117 -- # sync 00:24:01.004 11:11:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.004 11:11:29 -- nvmf/common.sh@120 -- # set +e 00:24:01.004 11:11:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.004 11:11:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.004 rmmod nvme_tcp 00:24:01.004 rmmod nvme_fabrics 00:24:01.004 rmmod nvme_keyring 00:24:01.004 11:11:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.004 11:11:29 -- nvmf/common.sh@124 -- # set -e 00:24:01.004 11:11:29 -- nvmf/common.sh@125 -- # return 0 00:24:01.004 11:11:29 -- nvmf/common.sh@478 -- # '[' -n 93107 ']' 00:24:01.004 11:11:29 -- nvmf/common.sh@479 -- # killprocess 93107 00:24:01.004 11:11:29 -- common/autotest_common.sh@936 -- # '[' -z 93107 ']' 00:24:01.004 11:11:29 -- common/autotest_common.sh@940 -- # kill -0 93107 00:24:01.004 11:11:29 -- common/autotest_common.sh@941 -- # uname 00:24:01.004 11:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.004 11:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93107 00:24:01.004 11:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:24:01.004 11:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:24:01.004 killing process with pid 93107 00:24:01.004 11:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93107' 00:24:01.004 11:11:29 -- common/autotest_common.sh@955 -- # kill 93107 00:24:01.004 11:11:29 -- common/autotest_common.sh@960 -- # wait 93107 00:24:01.569 11:11:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:01.569 11:11:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:01.569 11:11:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:01.569 11:11:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.569 11:11:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.569 11:11:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.569 11:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.569 11:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.569 11:11:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:01.569 00:24:01.569 real 0m3.560s 00:24:01.569 user 0m12.732s 00:24:01.569 sys 0m1.390s 00:24:01.569 11:11:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:01.569 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:01.569 ************************************ 00:24:01.570 END TEST nvmf_bdevio_no_huge 00:24:01.570 ************************************ 00:24:01.570 11:11:30 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:01.570 11:11:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:01.570 11:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:01.570 11:11:30 -- 
common/autotest_common.sh@10 -- # set +x 00:24:01.570 ************************************ 00:24:01.570 START TEST nvmf_tls 00:24:01.570 ************************************ 00:24:01.570 11:11:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:01.828 * Looking for test storage... 00:24:01.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:01.828 11:11:30 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:01.828 11:11:30 -- nvmf/common.sh@7 -- # uname -s 00:24:01.828 11:11:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.828 11:11:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.828 11:11:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.828 11:11:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.828 11:11:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.828 11:11:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.828 11:11:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.828 11:11:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.828 11:11:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.828 11:11:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.828 11:11:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:24:01.828 11:11:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:24:01.828 11:11:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.828 11:11:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.828 11:11:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:01.828 11:11:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.829 11:11:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:01.829 11:11:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.829 11:11:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.829 11:11:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.829 11:11:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.829 11:11:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.829 11:11:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.829 11:11:30 -- paths/export.sh@5 -- # export PATH 00:24:01.829 11:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.829 11:11:30 -- nvmf/common.sh@47 -- # : 0 00:24:01.829 11:11:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.829 11:11:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.829 11:11:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.829 11:11:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.829 11:11:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.829 11:11:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.829 11:11:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.829 11:11:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.829 11:11:30 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:01.829 11:11:30 -- target/tls.sh@62 -- # nvmftestinit 00:24:01.829 11:11:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:01.829 11:11:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.829 11:11:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:01.829 11:11:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:01.829 11:11:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:01.829 11:11:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.829 11:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.829 11:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.829 11:11:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:01.829 11:11:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:01.829 11:11:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:01.829 11:11:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:01.829 11:11:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:01.829 11:11:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:01.829 11:11:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.829 11:11:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.829 11:11:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:01.829 11:11:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:01.829 11:11:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:01.829 11:11:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:01.829 11:11:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:01.829 
11:11:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.829 11:11:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:01.829 11:11:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:01.829 11:11:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:01.829 11:11:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:01.829 11:11:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:01.829 11:11:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:01.829 Cannot find device "nvmf_tgt_br" 00:24:01.829 11:11:30 -- nvmf/common.sh@155 -- # true 00:24:01.829 11:11:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:01.829 Cannot find device "nvmf_tgt_br2" 00:24:01.829 11:11:30 -- nvmf/common.sh@156 -- # true 00:24:01.829 11:11:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:01.829 11:11:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:01.829 Cannot find device "nvmf_tgt_br" 00:24:01.829 11:11:30 -- nvmf/common.sh@158 -- # true 00:24:01.829 11:11:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:01.829 Cannot find device "nvmf_tgt_br2" 00:24:01.829 11:11:30 -- nvmf/common.sh@159 -- # true 00:24:01.829 11:11:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:01.829 11:11:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:01.829 11:11:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:01.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.829 11:11:30 -- nvmf/common.sh@162 -- # true 00:24:01.829 11:11:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:01.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.829 11:11:30 -- nvmf/common.sh@163 -- # true 00:24:01.829 11:11:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:01.829 11:11:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:01.829 11:11:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:01.829 11:11:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:01.829 11:11:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:01.829 11:11:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.088 11:11:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.088 11:11:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:02.088 11:11:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:02.088 11:11:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:02.088 11:11:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:02.088 11:11:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:02.088 11:11:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:02.088 11:11:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.088 11:11:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.088 11:11:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.088 11:11:30 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:02.088 11:11:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:02.088 11:11:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.088 11:11:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:02.088 11:11:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.088 11:11:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.088 11:11:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.088 11:11:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:02.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:24:02.088 00:24:02.088 --- 10.0.0.2 ping statistics --- 00:24:02.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.088 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:24:02.088 11:11:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:02.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:24:02.088 00:24:02.088 --- 10.0.0.3 ping statistics --- 00:24:02.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.088 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:02.088 11:11:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:24:02.088 00:24:02.088 --- 10.0.0.1 ping statistics --- 00:24:02.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.088 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:02.088 11:11:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.088 11:11:30 -- nvmf/common.sh@422 -- # return 0 00:24:02.088 11:11:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:02.088 11:11:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.088 11:11:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:02.088 11:11:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:02.088 11:11:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.088 11:11:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:02.088 11:11:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:02.088 11:11:30 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:02.088 11:11:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:02.088 11:11:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:02.088 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:02.088 11:11:30 -- nvmf/common.sh@470 -- # nvmfpid=93357 00:24:02.088 11:11:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:02.088 11:11:30 -- nvmf/common.sh@471 -- # waitforlisten 93357 00:24:02.088 11:11:30 -- common/autotest_common.sh@817 -- # '[' -z 93357 ']' 00:24:02.088 11:11:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.088 11:11:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.088 11:11:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
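Unlike the earlier targets, the TLS target is launched with --wait-for-rpc, so the application pauses before subsystem initialization and the test can switch the default socket implementation to ssl and pin its TLS options before any listener exists; only then is initialization released. The trace that follows does this through sock_set_default_impl, sock_impl_get_options / sock_impl_set_options and framework_start_init. A condensed sketch of that pre-init sequence, with paths as in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target was launched as: nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
$rpc sock_set_default_impl -i ssl                        # make the ssl socket implementation the default
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # 0 by default, as the trace verifies
$rpc sock_impl_set_options -i ssl --tls-version 13       # pin TLS 1.3 for the test
$rpc framework_start_init                                # let the paused target finish initialization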
00:24:02.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.088 11:11:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.088 11:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:02.088 [2024-04-18 11:11:30.712009] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:02.088 [2024-04-18 11:11:30.712172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.347 [2024-04-18 11:11:30.854400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.347 [2024-04-18 11:11:30.953690] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.347 [2024-04-18 11:11:30.953760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.347 [2024-04-18 11:11:30.953773] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.347 [2024-04-18 11:11:30.953789] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.347 [2024-04-18 11:11:30.953799] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.347 [2024-04-18 11:11:30.953842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.283 11:11:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:03.283 11:11:31 -- common/autotest_common.sh@850 -- # return 0 00:24:03.283 11:11:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:03.283 11:11:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:03.283 11:11:31 -- common/autotest_common.sh@10 -- # set +x 00:24:03.283 11:11:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.283 11:11:31 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:03.283 11:11:31 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:03.283 true 00:24:03.283 11:11:31 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:03.283 11:11:31 -- target/tls.sh@73 -- # jq -r .tls_version 00:24:03.541 11:11:32 -- target/tls.sh@73 -- # version=0 00:24:03.541 11:11:32 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:03.541 11:11:32 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:03.799 11:11:32 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:03.799 11:11:32 -- target/tls.sh@81 -- # jq -r .tls_version 00:24:04.057 11:11:32 -- target/tls.sh@81 -- # version=13 00:24:04.057 11:11:32 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:04.057 11:11:32 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:04.316 11:11:32 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.316 11:11:32 -- target/tls.sh@89 -- # jq -r .tls_version 00:24:04.573 11:11:33 -- target/tls.sh@89 -- # version=7 00:24:04.573 11:11:33 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:04.573 11:11:33 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.573 11:11:33 -- 
target/tls.sh@96 -- # jq -r .enable_ktls 00:24:04.831 11:11:33 -- target/tls.sh@96 -- # ktls=false 00:24:04.831 11:11:33 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:04.831 11:11:33 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:05.089 11:11:33 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:05.089 11:11:33 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.347 11:11:33 -- target/tls.sh@104 -- # ktls=true 00:24:05.347 11:11:33 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:05.347 11:11:33 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:05.913 11:11:34 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.913 11:11:34 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:05.913 11:11:34 -- target/tls.sh@112 -- # ktls=false 00:24:05.913 11:11:34 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:05.913 11:11:34 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:05.913 11:11:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:05.914 11:11:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:05.914 11:11:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:05.914 11:11:34 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:05.914 11:11:34 -- nvmf/common.sh@693 -- # digest=1 00:24:05.914 11:11:34 -- nvmf/common.sh@694 -- # python - 00:24:06.171 11:11:34 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:06.171 11:11:34 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:06.171 11:11:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:06.171 11:11:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.172 11:11:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:06.172 11:11:34 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:24:06.172 11:11:34 -- nvmf/common.sh@693 -- # digest=1 00:24:06.172 11:11:34 -- nvmf/common.sh@694 -- # python - 00:24:06.172 11:11:34 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:06.172 11:11:34 -- target/tls.sh@121 -- # mktemp 00:24:06.172 11:11:34 -- target/tls.sh@121 -- # key_path=/tmp/tmp.C023K450lU 00:24:06.172 11:11:34 -- target/tls.sh@122 -- # mktemp 00:24:06.172 11:11:34 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.fzyX5LWcRd 00:24:06.172 11:11:34 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:06.172 11:11:34 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:06.172 11:11:34 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.C023K450lU 00:24:06.172 11:11:34 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.fzyX5LWcRd 00:24:06.172 11:11:34 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:06.471 11:11:34 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:24:06.729 11:11:35 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.C023K450lU 00:24:06.729 11:11:35 -- target/tls.sh@49 -- # local key=/tmp/tmp.C023K450lU 00:24:06.729 11:11:35 -- 
target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:06.988 [2024-04-18 11:11:35.502055] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.988 11:11:35 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:07.246 11:11:35 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:07.505 [2024-04-18 11:11:36.014142] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.505 [2024-04-18 11:11:36.014359] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.505 11:11:36 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:07.764 malloc0 00:24:07.764 11:11:36 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:08.022 11:11:36 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C023K450lU 00:24:08.281 [2024-04-18 11:11:36.818339] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:08.281 11:11:36 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.C023K450lU 00:24:20.477 Initializing NVMe Controllers 00:24:20.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.477 Initialization complete. Launching workers. 
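The two key files used from here on (/tmp/tmp.C023K450lU holds the valid key, /tmp/tmp.fzyX5LWcRd a second key the target is never given for host1) contain PSKs in the NVMe TLS interchange format produced by format_interchange_psk above: "NVMeTLSkey-1:<hash-id>:<base64 of the key bytes followed by a 4-byte CRC32>:". A rough re-implementation of that helper, mirroring the python one-liner visible in the trace; the CRC byte order and the hash-id formatting are assumptions here, not verified against nvmf/common.sh:

format_interchange_psk() {    # usage: format_interchange_psk <key-string> <hash-id>
    python3 - "$1" "$2" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                      # configured key, taken as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # 4-byte CRC32 trailer, byte order assumed
psk = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), psk), end="")
PYEOF
}

key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"    # restrict the key file before handing it to target and initiator, as the test does

With the sample key 00112233445566778899aabbccddeeff and hash id 1 this should reproduce the NVMeTLSkey-1:01:MDAxMTIy... string shown in the trace, assuming the CRC handling matches.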
00:24:20.477 ======================================================== 00:24:20.477 Latency(us) 00:24:20.477 Device Information : IOPS MiB/s Average min max 00:24:20.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9460.78 36.96 6766.26 1537.89 10981.31 00:24:20.477 ======================================================== 00:24:20.477 Total : 9460.78 36.96 6766.26 1537.89 10981.31 00:24:20.477 00:24:20.477 11:11:47 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C023K450lU 00:24:20.477 11:11:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:20.477 11:11:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:20.477 11:11:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:20.477 11:11:47 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.C023K450lU' 00:24:20.477 11:11:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.477 11:11:47 -- target/tls.sh@28 -- # bdevperf_pid=93712 00:24:20.477 11:11:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.477 11:11:47 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.477 11:11:47 -- target/tls.sh@31 -- # waitforlisten 93712 /var/tmp/bdevperf.sock 00:24:20.477 11:11:47 -- common/autotest_common.sh@817 -- # '[' -z 93712 ']' 00:24:20.477 11:11:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.477 11:11:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:20.477 11:11:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.477 11:11:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:20.477 11:11:47 -- common/autotest_common.sh@10 -- # set +x 00:24:20.477 [2024-04-18 11:11:47.097579] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:20.477 [2024-04-18 11:11:47.097683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93712 ] 00:24:20.477 [2024-04-18 11:11:47.239950] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.477 [2024-04-18 11:11:47.338434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.477 11:11:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.477 11:11:48 -- common/autotest_common.sh@850 -- # return 0 00:24:20.477 11:11:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C023K450lU 00:24:20.477 [2024-04-18 11:11:48.247569] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.477 [2024-04-18 11:11:48.247756] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:20.477 TLSTESTn1 00:24:20.477 11:11:48 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:20.477 Running I/O for 10 seconds... 00:24:30.446 00:24:30.446 Latency(us) 00:24:30.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.446 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.446 Verification LBA range: start 0x0 length 0x2000 00:24:30.446 TLSTESTn1 : 10.02 3947.12 15.42 0.00 0.00 32365.59 7387.69 44326.17 00:24:30.446 =================================================================================================================== 00:24:30.446 Total : 3947.12 15.42 0.00 0.00 32365.59 7387.69 44326.17 00:24:30.446 0 00:24:30.446 11:11:58 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.446 11:11:58 -- target/tls.sh@45 -- # killprocess 93712 00:24:30.446 11:11:58 -- common/autotest_common.sh@936 -- # '[' -z 93712 ']' 00:24:30.446 11:11:58 -- common/autotest_common.sh@940 -- # kill -0 93712 00:24:30.446 11:11:58 -- common/autotest_common.sh@941 -- # uname 00:24:30.446 11:11:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:30.446 11:11:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93712 00:24:30.446 killing process with pid 93712 00:24:30.446 Received shutdown signal, test time was about 10.000000 seconds 00:24:30.446 00:24:30.446 Latency(us) 00:24:30.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.446 =================================================================================================================== 00:24:30.446 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.446 11:11:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:30.446 11:11:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:30.446 11:11:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93712' 00:24:30.446 11:11:58 -- common/autotest_common.sh@955 -- # kill 93712 00:24:30.446 [2024-04-18 11:11:58.488692] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:30.446 
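With the target serving cnode1 over TLS, the data path is exercised by bdevperf in RPC mode: the app is started idle (-z) on its own RPC socket, the NVMe bdev is attached with the same PSK file the target registered for host1, and bdevperf.py triggers the 10-second verify workload whose results follow. Condensed from the trace; the wait for the RPC socket to appear and all cleanup are omitted:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# idle bdevperf instance with its own RPC socket; workload parameters as in the trace
$bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

# attach the remote namespace over TCP+TLS using the interchange-format key file
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.C023K450lU

# kick off the configured workload against the TLSTESTn1 bdev
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests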
11:11:58 -- common/autotest_common.sh@960 -- # wait 93712 00:24:30.446 11:11:58 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fzyX5LWcRd 00:24:30.446 11:11:58 -- common/autotest_common.sh@638 -- # local es=0 00:24:30.446 11:11:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fzyX5LWcRd 00:24:30.446 11:11:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:30.446 11:11:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:30.446 11:11:58 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:30.446 11:11:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:30.446 11:11:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fzyX5LWcRd 00:24:30.446 11:11:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:30.446 11:11:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:30.446 11:11:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:30.446 11:11:58 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fzyX5LWcRd' 00:24:30.446 11:11:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:30.446 11:11:58 -- target/tls.sh@28 -- # bdevperf_pid=93857 00:24:30.446 11:11:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:30.446 11:11:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:30.446 11:11:58 -- target/tls.sh@31 -- # waitforlisten 93857 /var/tmp/bdevperf.sock 00:24:30.446 11:11:58 -- common/autotest_common.sh@817 -- # '[' -z 93857 ']' 00:24:30.446 11:11:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.446 11:11:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.446 11:11:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.446 11:11:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.446 11:11:58 -- common/autotest_common.sh@10 -- # set +x 00:24:30.446 [2024-04-18 11:11:58.787715] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:30.446 [2024-04-18 11:11:58.787851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93857 ] 00:24:30.446 [2024-04-18 11:11:58.923050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.447 [2024-04-18 11:11:59.021097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.381 11:11:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:31.381 11:11:59 -- common/autotest_common.sh@850 -- # return 0 00:24:31.381 11:11:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fzyX5LWcRd 00:24:31.639 [2024-04-18 11:12:00.083667] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.639 [2024-04-18 11:12:00.083790] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:31.639 [2024-04-18 11:12:00.094226] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:31.639 [2024-04-18 11:12:00.094384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177bec0 (107): Transport endpoint is not connected 00:24:31.639 [2024-04-18 11:12:00.095374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177bec0 (9): Bad file descriptor 00:24:31.639 [2024-04-18 11:12:00.096370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.639 [2024-04-18 11:12:00.096393] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:31.639 [2024-04-18 11:12:00.096412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:31.639 2024/04/18 11:12:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.fzyX5LWcRd subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:31.639 request: 00:24:31.639 { 00:24:31.639 "method": "bdev_nvme_attach_controller", 00:24:31.639 "params": { 00:24:31.639 "name": "TLSTEST", 00:24:31.639 "trtype": "tcp", 00:24:31.639 "traddr": "10.0.0.2", 00:24:31.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.639 "adrfam": "ipv4", 00:24:31.639 "trsvcid": "4420", 00:24:31.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.639 "psk": "/tmp/tmp.fzyX5LWcRd" 00:24:31.639 } 00:24:31.639 } 00:24:31.639 Got JSON-RPC error response 00:24:31.639 GoRPCClient: error on JSON-RPC call 00:24:31.639 11:12:00 -- target/tls.sh@36 -- # killprocess 93857 00:24:31.639 11:12:00 -- common/autotest_common.sh@936 -- # '[' -z 93857 ']' 00:24:31.639 11:12:00 -- common/autotest_common.sh@940 -- # kill -0 93857 00:24:31.639 11:12:00 -- common/autotest_common.sh@941 -- # uname 00:24:31.639 11:12:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:31.639 11:12:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93857 00:24:31.639 killing process with pid 93857 00:24:31.639 Received shutdown signal, test time was about 10.000000 seconds 00:24:31.639 00:24:31.639 Latency(us) 00:24:31.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.639 =================================================================================================================== 00:24:31.639 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:31.639 11:12:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:31.639 11:12:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:31.639 11:12:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93857' 00:24:31.639 11:12:00 -- common/autotest_common.sh@955 -- # kill 93857 00:24:31.639 [2024-04-18 11:12:00.145347] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:31.639 11:12:00 -- common/autotest_common.sh@960 -- # wait 93857 00:24:31.902 11:12:00 -- target/tls.sh@37 -- # return 1 00:24:31.902 11:12:00 -- common/autotest_common.sh@641 -- # es=1 00:24:31.902 11:12:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:31.902 11:12:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:31.902 11:12:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:31.902 11:12:00 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C023K450lU 00:24:31.902 11:12:00 -- common/autotest_common.sh@638 -- # local es=0 00:24:31.902 11:12:00 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C023K450lU 00:24:31.902 11:12:00 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:31.902 11:12:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:31.902 11:12:00 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:31.902 11:12:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:31.902 11:12:00 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C023K450lU 00:24:31.902 11:12:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.902 11:12:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.902 11:12:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:31.902 11:12:00 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.C023K450lU' 00:24:31.902 11:12:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.902 11:12:00 -- target/tls.sh@28 -- # bdevperf_pid=93908 00:24:31.902 11:12:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.902 11:12:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.902 11:12:00 -- target/tls.sh@31 -- # waitforlisten 93908 /var/tmp/bdevperf.sock 00:24:31.902 11:12:00 -- common/autotest_common.sh@817 -- # '[' -z 93908 ']' 00:24:31.902 11:12:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.902 11:12:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:31.902 11:12:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.902 11:12:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:31.902 11:12:00 -- common/autotest_common.sh@10 -- # set +x 00:24:31.902 [2024-04-18 11:12:00.419562] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:31.902 [2024-04-18 11:12:00.419692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93908 ] 00:24:32.161 [2024-04-18 11:12:00.560128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.161 [2024-04-18 11:12:00.649439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.095 11:12:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:33.095 11:12:01 -- common/autotest_common.sh@850 -- # return 0 00:24:33.095 11:12:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.C023K450lU 00:24:33.095 [2024-04-18 11:12:01.595989] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.095 [2024-04-18 11:12:01.596124] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:33.095 [2024-04-18 11:12:01.604334] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:33.095 [2024-04-18 11:12:01.604378] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:33.096 [2024-04-18 11:12:01.604431] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:33.096 [2024-04-18 11:12:01.604706] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0ec0 (107): Transport endpoint is not connected 00:24:33.096 [2024-04-18 11:12:01.605688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0ec0 (9): Bad file descriptor 00:24:33.096 [2024-04-18 11:12:01.606689] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.096 [2024-04-18 11:12:01.606712] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:33.096 [2024-04-18 11:12:01.606727] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.096 2024/04/18 11:12:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.C023K450lU subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:33.096 request: 00:24:33.096 { 00:24:33.096 "method": "bdev_nvme_attach_controller", 00:24:33.096 "params": { 00:24:33.096 "name": "TLSTEST", 00:24:33.096 "trtype": "tcp", 00:24:33.096 "traddr": "10.0.0.2", 00:24:33.096 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:33.096 "adrfam": "ipv4", 00:24:33.096 "trsvcid": "4420", 00:24:33.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.096 "psk": "/tmp/tmp.C023K450lU" 00:24:33.096 } 00:24:33.096 } 00:24:33.096 Got JSON-RPC error response 00:24:33.096 GoRPCClient: error on JSON-RPC call 00:24:33.096 11:12:01 -- target/tls.sh@36 -- # killprocess 93908 00:24:33.096 11:12:01 -- common/autotest_common.sh@936 -- # '[' -z 93908 ']' 00:24:33.096 11:12:01 -- common/autotest_common.sh@940 -- # kill -0 93908 00:24:33.096 11:12:01 -- common/autotest_common.sh@941 -- # uname 00:24:33.096 11:12:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:33.096 11:12:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93908 00:24:33.096 killing process with pid 93908 00:24:33.096 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.096 00:24:33.096 Latency(us) 00:24:33.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.096 =================================================================================================================== 00:24:33.096 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.096 11:12:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:33.096 11:12:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:33.096 11:12:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93908' 00:24:33.096 11:12:01 -- common/autotest_common.sh@955 -- # kill 93908 00:24:33.096 [2024-04-18 11:12:01.654563] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:33.096 11:12:01 -- common/autotest_common.sh@960 -- # wait 93908 00:24:33.354 11:12:01 -- target/tls.sh@37 -- # return 1 00:24:33.354 11:12:01 -- common/autotest_common.sh@641 -- # es=1 00:24:33.354 11:12:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:33.354 11:12:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:33.354 11:12:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:33.354 11:12:01 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.C023K450lU 00:24:33.354 11:12:01 -- common/autotest_common.sh@638 -- # local es=0 00:24:33.354 11:12:01 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C023K450lU 00:24:33.354 11:12:01 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:33.354 11:12:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:33.354 11:12:01 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:33.354 11:12:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:33.354 11:12:01 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C023K450lU 00:24:33.354 11:12:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:33.354 11:12:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:33.354 11:12:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:33.354 11:12:01 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.C023K450lU' 00:24:33.354 11:12:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.354 11:12:01 -- target/tls.sh@28 -- # bdevperf_pid=93948 00:24:33.354 11:12:01 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.354 11:12:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.354 11:12:01 -- target/tls.sh@31 -- # waitforlisten 93948 /var/tmp/bdevperf.sock 00:24:33.354 11:12:01 -- common/autotest_common.sh@817 -- # '[' -z 93948 ']' 00:24:33.354 11:12:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.354 11:12:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:33.354 11:12:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.354 11:12:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:33.354 11:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:33.354 [2024-04-18 11:12:01.926006] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:33.354 [2024-04-18 11:12:01.926106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93948 ] 00:24:33.612 [2024-04-18 11:12:02.064739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.612 [2024-04-18 11:12:02.161416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.546 11:12:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:34.546 11:12:02 -- common/autotest_common.sh@850 -- # return 0 00:24:34.546 11:12:02 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C023K450lU 00:24:34.546 [2024-04-18 11:12:03.161178] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.546 [2024-04-18 11:12:03.161312] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:34.546 [2024-04-18 11:12:03.166369] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:34.546 [2024-04-18 11:12:03.166409] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:34.546 [2024-04-18 11:12:03.166462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:34.546 [2024-04-18 11:12:03.167075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833ec0 (107): Transport endpoint is not connected 00:24:34.546 [2024-04-18 11:12:03.168062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x833ec0 (9): Bad file descriptor 00:24:34.546 [2024-04-18 11:12:03.169062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:34.546 [2024-04-18 11:12:03.169098] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:34.546 [2024-04-18 11:12:03.169114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:34.546 2024/04/18 11:12:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.C023K450lU subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:34.546 request: 00:24:34.546 { 00:24:34.546 "method": "bdev_nvme_attach_controller", 00:24:34.546 "params": { 00:24:34.546 "name": "TLSTEST", 00:24:34.546 "trtype": "tcp", 00:24:34.546 "traddr": "10.0.0.2", 00:24:34.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.546 "adrfam": "ipv4", 00:24:34.546 "trsvcid": "4420", 00:24:34.546 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:34.546 "psk": "/tmp/tmp.C023K450lU" 00:24:34.546 } 00:24:34.546 } 00:24:34.546 Got JSON-RPC error response 00:24:34.546 GoRPCClient: error on JSON-RPC call 00:24:34.805 11:12:03 -- target/tls.sh@36 -- # killprocess 93948 00:24:34.805 11:12:03 -- common/autotest_common.sh@936 -- # '[' -z 93948 ']' 00:24:34.805 11:12:03 -- common/autotest_common.sh@940 -- # kill -0 93948 00:24:34.805 11:12:03 -- common/autotest_common.sh@941 -- # uname 00:24:34.805 11:12:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.805 11:12:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93948 00:24:34.805 killing process with pid 93948 00:24:34.805 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.805 00:24:34.805 Latency(us) 00:24:34.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.805 =================================================================================================================== 00:24:34.805 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:34.805 11:12:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:34.805 11:12:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:34.805 11:12:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93948' 00:24:34.805 11:12:03 -- common/autotest_common.sh@955 -- # kill 93948 00:24:34.805 [2024-04-18 11:12:03.218661] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:34.805 11:12:03 -- common/autotest_common.sh@960 -- # wait 93948 00:24:34.805 11:12:03 -- target/tls.sh@37 -- # return 1 00:24:34.805 11:12:03 -- common/autotest_common.sh@641 -- # es=1 00:24:34.805 11:12:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:34.805 11:12:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:34.805 11:12:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:34.805 11:12:03 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:34.805 11:12:03 -- common/autotest_common.sh@638 -- # local es=0 00:24:34.805 11:12:03 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:34.805 11:12:03 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:34.805 11:12:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:34.805 11:12:03 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:34.805 11:12:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:34.805 11:12:03 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
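The failing attach attempts traced above all follow the same pattern: the initiator presents a key/identity combination the target does not recognize (an unregistered key file, a valid key offered for the wrong host or subsystem NQN, and, in the case starting here, no --psk argument at all against a TLS-only listener). The target cannot find a PSK for the identity, the TCP connection is torn down ("Transport endpoint is not connected"), bdev_nvme_attach_controller returns Code=-32602 Invalid parameters, and the suite only asserts that the wrapper exits non-zero. Reduced to one expected-failure check it looks roughly like the sketch below; the plain shell negation stands in for the suite's NOT helper, and the values are taken from this run.

  # expected to fail: no PSK supplied for a listener that requires TLS
  if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
      echo 'attach failed as expected'
  fi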
00:24:34.805 11:12:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:34.805 11:12:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:34.805 11:12:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:34.805 11:12:03 -- target/tls.sh@23 -- # psk= 00:24:34.805 11:12:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:34.805 11:12:03 -- target/tls.sh@28 -- # bdevperf_pid=93994 00:24:34.805 11:12:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:34.805 11:12:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:34.805 11:12:03 -- target/tls.sh@31 -- # waitforlisten 93994 /var/tmp/bdevperf.sock 00:24:34.805 11:12:03 -- common/autotest_common.sh@817 -- # '[' -z 93994 ']' 00:24:34.805 11:12:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.805 11:12:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:34.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:34.805 11:12:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.805 11:12:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:34.805 11:12:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.064 [2024-04-18 11:12:03.482857] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:35.064 [2024-04-18 11:12:03.482954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93994 ] 00:24:35.064 [2024-04-18 11:12:03.616108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.321 [2024-04-18 11:12:03.713262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.887 11:12:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:35.887 11:12:04 -- common/autotest_common.sh@850 -- # return 0 00:24:35.887 11:12:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:36.146 [2024-04-18 11:12:04.734742] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:36.146 [2024-04-18 11:12:04.736360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8abd0 (9): Bad file descriptor 00:24:36.146 [2024-04-18 11:12:04.737354] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.146 [2024-04-18 11:12:04.737377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:36.146 [2024-04-18 11:12:04.737391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:36.146 2024/04/18 11:12:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:36.146 request: 00:24:36.146 { 00:24:36.146 "method": "bdev_nvme_attach_controller", 00:24:36.146 "params": { 00:24:36.146 "name": "TLSTEST", 00:24:36.146 "trtype": "tcp", 00:24:36.146 "traddr": "10.0.0.2", 00:24:36.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.146 "adrfam": "ipv4", 00:24:36.146 "trsvcid": "4420", 00:24:36.146 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:24:36.146 } 00:24:36.146 } 00:24:36.146 Got JSON-RPC error response 00:24:36.146 GoRPCClient: error on JSON-RPC call 00:24:36.146 11:12:04 -- target/tls.sh@36 -- # killprocess 93994 00:24:36.146 11:12:04 -- common/autotest_common.sh@936 -- # '[' -z 93994 ']' 00:24:36.146 11:12:04 -- common/autotest_common.sh@940 -- # kill -0 93994 00:24:36.146 11:12:04 -- common/autotest_common.sh@941 -- # uname 00:24:36.146 11:12:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:36.146 11:12:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93994 00:24:36.146 11:12:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:36.146 11:12:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:36.146 killing process with pid 93994 00:24:36.146 11:12:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93994' 00:24:36.146 Received shutdown signal, test time was about 10.000000 seconds 00:24:36.146 00:24:36.146 Latency(us) 00:24:36.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.146 =================================================================================================================== 00:24:36.146 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:36.146 11:12:04 -- common/autotest_common.sh@955 -- # kill 93994 00:24:36.146 11:12:04 -- common/autotest_common.sh@960 -- # wait 93994 00:24:36.404 11:12:04 -- target/tls.sh@37 -- # return 1 00:24:36.404 11:12:04 -- common/autotest_common.sh@641 -- # es=1 00:24:36.404 11:12:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:36.404 11:12:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:36.404 11:12:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:36.404 11:12:04 -- target/tls.sh@158 -- # killprocess 93357 00:24:36.404 11:12:04 -- common/autotest_common.sh@936 -- # '[' -z 93357 ']' 00:24:36.404 11:12:04 -- common/autotest_common.sh@940 -- # kill -0 93357 00:24:36.404 11:12:05 -- common/autotest_common.sh@941 -- # uname 00:24:36.404 11:12:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:36.404 11:12:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93357 00:24:36.404 11:12:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:36.404 11:12:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:36.404 killing process with pid 93357 00:24:36.404 11:12:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93357' 00:24:36.404 11:12:05 -- common/autotest_common.sh@955 -- # kill 93357 00:24:36.404 [2024-04-18 11:12:05.023839] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:36.404 11:12:05 -- 
common/autotest_common.sh@960 -- # wait 93357 00:24:36.663 11:12:05 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:36.663 11:12:05 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:36.663 11:12:05 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.663 11:12:05 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:36.663 11:12:05 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:36.663 11:12:05 -- nvmf/common.sh@693 -- # digest=2 00:24:36.663 11:12:05 -- nvmf/common.sh@694 -- # python - 00:24:36.922 11:12:05 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:36.922 11:12:05 -- target/tls.sh@160 -- # mktemp 00:24:36.922 11:12:05 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.crRmH0pMpj 00:24:36.922 11:12:05 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:36.922 11:12:05 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.crRmH0pMpj 00:24:36.922 11:12:05 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:36.922 11:12:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:36.922 11:12:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:36.922 11:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:36.922 11:12:05 -- nvmf/common.sh@470 -- # nvmfpid=94055 00:24:36.922 11:12:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.922 11:12:05 -- nvmf/common.sh@471 -- # waitforlisten 94055 00:24:36.922 11:12:05 -- common/autotest_common.sh@817 -- # '[' -z 94055 ']' 00:24:36.922 11:12:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.922 11:12:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:36.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.922 11:12:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.922 11:12:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:36.922 11:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:36.922 [2024-04-18 11:12:05.385975] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:36.922 [2024-04-18 11:12:05.386131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.922 [2024-04-18 11:12:05.528516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.181 [2024-04-18 11:12:05.624239] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.181 [2024-04-18 11:12:05.624291] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.181 [2024-04-18 11:12:05.624303] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.181 [2024-04-18 11:12:05.624311] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.181 [2024-04-18 11:12:05.624319] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
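The key material for the next group of tests is generated just above: format_interchange_psk takes the raw hex key 00112233445566778899aabbccddeeff0011223344556677 and digest selector 2 and emits the NVMe TLS interchange form NVMeTLSkey-1:02:<base64 payload>:, where the 02 field reflects the requested digest and the payload base64-encodes the configured key followed by what appears to be a short trailing checksum. The key is written to a mktemp file and restricted to mode 0600, which the permission tests later in the log depend on. A minimal sketch of the file handling, assuming the key string itself is produced by the suite's nvmf/common.sh helper rather than by this snippet:

  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_long_path=$(mktemp)            # /tmp/tmp.crRmH0pMpj in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"        # must stay restrictive; 0666 is rejected later in the log

The nvmf target started here is then configured for TLS in the lines that follow: nvmf_create_transport -t tcp -o, a subsystem nqn.2016-06.io.spdk:cnode1 with a listener added via nvmf_subsystem_add_listener ... -k, a malloc0 namespace, and the per-host key registered with nvmf_subsystem_add_host ... --psk /tmp/tmp.crRmH0pMpj.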
00:24:37.181 [2024-04-18 11:12:05.624359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.117 11:12:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:38.117 11:12:06 -- common/autotest_common.sh@850 -- # return 0 00:24:38.117 11:12:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:38.117 11:12:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:38.117 11:12:06 -- common/autotest_common.sh@10 -- # set +x 00:24:38.117 11:12:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.117 11:12:06 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.crRmH0pMpj 00:24:38.117 11:12:06 -- target/tls.sh@49 -- # local key=/tmp/tmp.crRmH0pMpj 00:24:38.117 11:12:06 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.117 [2024-04-18 11:12:06.698133] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.117 11:12:06 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:38.376 11:12:06 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:38.634 [2024-04-18 11:12:07.178308] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.634 [2024-04-18 11:12:07.178586] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.634 11:12:07 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:38.896 malloc0 00:24:38.896 11:12:07 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:39.155 11:12:07 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj 00:24:39.414 [2024-04-18 11:12:07.894628] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:39.414 11:12:07 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.crRmH0pMpj 00:24:39.414 11:12:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:39.414 11:12:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:39.414 11:12:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:39.414 11:12:07 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.crRmH0pMpj' 00:24:39.414 11:12:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.414 11:12:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.414 11:12:07 -- target/tls.sh@28 -- # bdevperf_pid=94152 00:24:39.414 11:12:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.414 11:12:07 -- target/tls.sh@31 -- # waitforlisten 94152 /var/tmp/bdevperf.sock 00:24:39.414 11:12:07 -- common/autotest_common.sh@817 -- # '[' -z 94152 ']' 00:24:39.414 11:12:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.414 11:12:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:39.414 11:12:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.414 11:12:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.414 11:12:07 -- common/autotest_common.sh@10 -- # set +x 00:24:39.414 [2024-04-18 11:12:07.962169] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:39.414 [2024-04-18 11:12:07.962262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94152 ] 00:24:39.672 [2024-04-18 11:12:08.102436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.672 [2024-04-18 11:12:08.201658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.607 11:12:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:40.607 11:12:08 -- common/autotest_common.sh@850 -- # return 0 00:24:40.607 11:12:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj 00:24:40.607 [2024-04-18 11:12:09.141990] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.607 [2024-04-18 11:12:09.142114] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:40.607 TLSTESTn1 00:24:40.607 11:12:09 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:40.864 Running I/O for 10 seconds... 
00:24:50.924 00:24:50.924 Latency(us) 00:24:50.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.924 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:50.924 Verification LBA range: start 0x0 length 0x2000 00:24:50.924 TLSTESTn1 : 10.02 3948.42 15.42 0.00 0.00 32351.93 8340.95 32172.22 00:24:50.924 =================================================================================================================== 00:24:50.924 Total : 3948.42 15.42 0.00 0.00 32351.93 8340.95 32172.22 00:24:50.924 0 00:24:50.924 11:12:19 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.924 11:12:19 -- target/tls.sh@45 -- # killprocess 94152 00:24:50.924 11:12:19 -- common/autotest_common.sh@936 -- # '[' -z 94152 ']' 00:24:50.924 11:12:19 -- common/autotest_common.sh@940 -- # kill -0 94152 00:24:50.924 11:12:19 -- common/autotest_common.sh@941 -- # uname 00:24:50.924 11:12:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:50.924 11:12:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94152 00:24:50.924 11:12:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:50.924 11:12:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:50.924 killing process with pid 94152 00:24:50.924 11:12:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94152' 00:24:50.924 11:12:19 -- common/autotest_common.sh@955 -- # kill 94152 00:24:50.924 Received shutdown signal, test time was about 10.000000 seconds 00:24:50.924 00:24:50.924 Latency(us) 00:24:50.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.924 =================================================================================================================== 00:24:50.924 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.924 [2024-04-18 11:12:19.420207] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:50.924 11:12:19 -- common/autotest_common.sh@960 -- # wait 94152 00:24:51.182 11:12:19 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.crRmH0pMpj 00:24:51.182 11:12:19 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.crRmH0pMpj 00:24:51.182 11:12:19 -- common/autotest_common.sh@638 -- # local es=0 00:24:51.182 11:12:19 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.crRmH0pMpj 00:24:51.182 11:12:19 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:24:51.182 11:12:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.182 11:12:19 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:24:51.182 11:12:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.182 11:12:19 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.crRmH0pMpj 00:24:51.182 11:12:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:51.182 11:12:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:51.182 11:12:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:51.182 11:12:19 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.crRmH0pMpj' 00:24:51.182 11:12:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.182 11:12:19 -- target/tls.sh@28 -- # bdevperf_pid=94305 00:24:51.182 
11:12:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:51.182 11:12:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:51.182 11:12:19 -- target/tls.sh@31 -- # waitforlisten 94305 /var/tmp/bdevperf.sock 00:24:51.182 11:12:19 -- common/autotest_common.sh@817 -- # '[' -z 94305 ']' 00:24:51.182 11:12:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.182 11:12:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:51.182 11:12:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.182 11:12:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:51.182 11:12:19 -- common/autotest_common.sh@10 -- # set +x 00:24:51.182 [2024-04-18 11:12:19.699160] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:51.182 [2024-04-18 11:12:19.699534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94305 ] 00:24:51.440 [2024-04-18 11:12:19.844254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.440 [2024-04-18 11:12:19.938083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.374 11:12:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:52.374 11:12:20 -- common/autotest_common.sh@850 -- # return 0 00:24:52.374 11:12:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj 00:24:52.374 [2024-04-18 11:12:20.981244] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.374 [2024-04-18 11:12:20.981650] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:52.374 [2024-04-18 11:12:20.981665] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.crRmH0pMpj 00:24:52.374 2024/04/18 11:12:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.crRmH0pMpj subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:24:52.374 request: 00:24:52.374 { 00:24:52.374 "method": "bdev_nvme_attach_controller", 00:24:52.374 "params": { 00:24:52.374 "name": "TLSTEST", 00:24:52.374 "trtype": "tcp", 00:24:52.374 "traddr": "10.0.0.2", 00:24:52.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.374 "adrfam": "ipv4", 00:24:52.374 "trsvcid": "4420", 00:24:52.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.374 "psk": "/tmp/tmp.crRmH0pMpj" 00:24:52.374 } 00:24:52.374 } 00:24:52.374 Got JSON-RPC error response 00:24:52.374 GoRPCClient: error on JSON-RPC call 00:24:52.374 11:12:21 -- target/tls.sh@36 -- # killprocess 94305 00:24:52.374 11:12:21 -- common/autotest_common.sh@936 -- # '[' -z 94305 ']' 00:24:52.374 11:12:21 -- common/autotest_common.sh@940 -- # kill -0 94305 
00:24:52.374 11:12:21 -- common/autotest_common.sh@941 -- # uname 00:24:52.374 11:12:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:52.374 11:12:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94305 00:24:52.632 11:12:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:52.632 11:12:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:52.632 killing process with pid 94305 00:24:52.632 11:12:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94305' 00:24:52.632 11:12:21 -- common/autotest_common.sh@955 -- # kill 94305 00:24:52.632 Received shutdown signal, test time was about 10.000000 seconds 00:24:52.632 00:24:52.632 Latency(us) 00:24:52.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.632 =================================================================================================================== 00:24:52.632 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:52.632 11:12:21 -- common/autotest_common.sh@960 -- # wait 94305 00:24:52.632 11:12:21 -- target/tls.sh@37 -- # return 1 00:24:52.632 11:12:21 -- common/autotest_common.sh@641 -- # es=1 00:24:52.632 11:12:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:52.632 11:12:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:52.632 11:12:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:52.632 11:12:21 -- target/tls.sh@174 -- # killprocess 94055 00:24:52.632 11:12:21 -- common/autotest_common.sh@936 -- # '[' -z 94055 ']' 00:24:52.632 11:12:21 -- common/autotest_common.sh@940 -- # kill -0 94055 00:24:52.632 11:12:21 -- common/autotest_common.sh@941 -- # uname 00:24:52.632 11:12:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:52.632 11:12:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94055 00:24:52.632 11:12:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:52.632 11:12:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:52.632 killing process with pid 94055 00:24:52.632 11:12:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94055' 00:24:52.633 11:12:21 -- common/autotest_common.sh@955 -- # kill 94055 00:24:52.633 [2024-04-18 11:12:21.256866] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:52.633 11:12:21 -- common/autotest_common.sh@960 -- # wait 94055 00:24:52.891 11:12:21 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:52.891 11:12:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:52.891 11:12:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:52.891 11:12:21 -- common/autotest_common.sh@10 -- # set +x 00:24:52.891 11:12:21 -- nvmf/common.sh@470 -- # nvmfpid=94354 00:24:52.891 11:12:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:52.891 11:12:21 -- nvmf/common.sh@471 -- # waitforlisten 94354 00:24:52.891 11:12:21 -- common/autotest_common.sh@817 -- # '[' -z 94354 ']' 00:24:52.891 11:12:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.891 11:12:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:52.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
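The chmod 0666 step above deliberately loosens the key file's permissions, and the host-side attach then fails before any connection is made: bdev_nvme reports "Incorrect permissions for PSK file" and "Could not load PSK from /tmp/tmp.crRmH0pMpj". The nvmf target being restarted here is used to demonstrate the matching check on the target side, where nvmf_subsystem_add_host with the still world-readable key file is likewise expected to fail. A compressed sketch of the host-side permission check, with paths and NQNs as in this run:

  chmod 0666 /tmp/tmp.crRmH0pMpj     # intentionally too permissive
  # expected to fail with "Could not load PSK from /tmp/tmp.crRmH0pMpj"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj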
00:24:52.891 11:12:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.891 11:12:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:52.891 11:12:21 -- common/autotest_common.sh@10 -- # set +x 00:24:52.891 [2024-04-18 11:12:21.530938] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:52.891 [2024-04-18 11:12:21.531069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.148 [2024-04-18 11:12:21.664096] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.148 [2024-04-18 11:12:21.758642] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.148 [2024-04-18 11:12:21.758698] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.148 [2024-04-18 11:12:21.758710] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.148 [2024-04-18 11:12:21.758719] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.148 [2024-04-18 11:12:21.758727] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.148 [2024-04-18 11:12:21.758764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.406 11:12:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:53.406 11:12:21 -- common/autotest_common.sh@850 -- # return 0 00:24:53.406 11:12:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:53.406 11:12:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:53.406 11:12:21 -- common/autotest_common.sh@10 -- # set +x 00:24:53.406 11:12:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.406 11:12:21 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.crRmH0pMpj 00:24:53.406 11:12:21 -- common/autotest_common.sh@638 -- # local es=0 00:24:53.406 11:12:21 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.crRmH0pMpj 00:24:53.406 11:12:21 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:24:53.406 11:12:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:53.406 11:12:21 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:24:53.406 11:12:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:53.406 11:12:21 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.crRmH0pMpj 00:24:53.406 11:12:21 -- target/tls.sh@49 -- # local key=/tmp/tmp.crRmH0pMpj 00:24:53.406 11:12:21 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:53.665 [2024-04-18 11:12:22.179499] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.665 11:12:22 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:53.923 11:12:22 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:54.182 [2024-04-18 11:12:22.711916] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.182 
[2024-04-18 11:12:22.712190] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.182 11:12:22 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:54.440 malloc0 00:24:54.440 11:12:22 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:54.699 11:12:23 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj 00:24:54.958 [2024-04-18 11:12:23.508191] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:54.958 [2024-04-18 11:12:23.508239] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:54.958 [2024-04-18 11:12:23.508265] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:24:54.958 2024/04/18 11:12:23 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.crRmH0pMpj], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:24:54.958 request: 00:24:54.958 { 00:24:54.958 "method": "nvmf_subsystem_add_host", 00:24:54.958 "params": { 00:24:54.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.958 "host": "nqn.2016-06.io.spdk:host1", 00:24:54.958 "psk": "/tmp/tmp.crRmH0pMpj" 00:24:54.958 } 00:24:54.958 } 00:24:54.958 Got JSON-RPC error response 00:24:54.958 GoRPCClient: error on JSON-RPC call 00:24:54.958 11:12:23 -- common/autotest_common.sh@641 -- # es=1 00:24:54.958 11:12:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:54.958 11:12:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:54.958 11:12:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:54.958 11:12:23 -- target/tls.sh@180 -- # killprocess 94354 00:24:54.958 11:12:23 -- common/autotest_common.sh@936 -- # '[' -z 94354 ']' 00:24:54.958 11:12:23 -- common/autotest_common.sh@940 -- # kill -0 94354 00:24:54.958 11:12:23 -- common/autotest_common.sh@941 -- # uname 00:24:54.958 11:12:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:54.958 11:12:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94354 00:24:54.958 11:12:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:54.958 11:12:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:54.958 killing process with pid 94354 00:24:54.958 11:12:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94354' 00:24:54.958 11:12:23 -- common/autotest_common.sh@955 -- # kill 94354 00:24:54.958 11:12:23 -- common/autotest_common.sh@960 -- # wait 94354 00:24:55.217 11:12:23 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.crRmH0pMpj 00:24:55.217 11:12:23 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:55.217 11:12:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:55.217 11:12:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:55.217 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:24:55.217 11:12:23 -- nvmf/common.sh@470 -- # nvmfpid=94458 00:24:55.217 11:12:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:55.217 11:12:23 -- nvmf/common.sh@471 -- # waitforlisten 94458 00:24:55.217 11:12:23 -- common/autotest_common.sh@817 -- # '[' -z 
94458 ']' 00:24:55.217 11:12:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.217 11:12:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:55.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.217 11:12:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.217 11:12:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:55.217 11:12:23 -- common/autotest_common.sh@10 -- # set +x 00:24:55.476 [2024-04-18 11:12:23.870978] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:55.476 [2024-04-18 11:12:23.871132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.476 [2024-04-18 11:12:24.013625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.476 [2024-04-18 11:12:24.116176] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.476 [2024-04-18 11:12:24.116232] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.476 [2024-04-18 11:12:24.116258] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.476 [2024-04-18 11:12:24.116266] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.476 [2024-04-18 11:12:24.116273] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.476 [2024-04-18 11:12:24.116318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.412 11:12:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:56.412 11:12:24 -- common/autotest_common.sh@850 -- # return 0 00:24:56.412 11:12:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:56.412 11:12:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:56.412 11:12:24 -- common/autotest_common.sh@10 -- # set +x 00:24:56.412 11:12:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.412 11:12:24 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.crRmH0pMpj 00:24:56.412 11:12:24 -- target/tls.sh@49 -- # local key=/tmp/tmp.crRmH0pMpj 00:24:56.412 11:12:24 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:56.670 [2024-04-18 11:12:25.180405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.670 11:12:25 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:56.929 11:12:25 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:57.188 [2024-04-18 11:12:25.688548] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.188 [2024-04-18 11:12:25.688891] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.188 11:12:25 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:57.446 malloc0 00:24:57.446 11:12:26 -- target/tls.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:57.705 11:12:26 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj 00:24:57.963 [2024-04-18 11:12:26.500963] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:57.963 11:12:26 -- target/tls.sh@188 -- # bdevperf_pid=94556 00:24:57.963 11:12:26 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:57.963 11:12:26 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:57.963 11:12:26 -- target/tls.sh@191 -- # waitforlisten 94556 /var/tmp/bdevperf.sock 00:24:57.963 11:12:26 -- common/autotest_common.sh@817 -- # '[' -z 94556 ']' 00:24:57.963 11:12:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.963 11:12:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:57.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.963 11:12:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.963 11:12:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:57.963 11:12:26 -- common/autotest_common.sh@10 -- # set +x 00:24:57.963 [2024-04-18 11:12:26.575275] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:57.963 [2024-04-18 11:12:26.575389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94556 ] 00:24:58.221 [2024-04-18 11:12:26.717810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.221 [2024-04-18 11:12:26.805535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.156 11:12:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.156 11:12:27 -- common/autotest_common.sh@850 -- # return 0 00:24:59.156 11:12:27 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj 00:24:59.413 [2024-04-18 11:12:27.825628] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.413 [2024-04-18 11:12:27.825763] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:59.413 TLSTESTn1 00:24:59.413 11:12:27 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:59.672 11:12:28 -- target/tls.sh@196 -- # tgtconf='{ 00:24:59.672 "subsystems": [ 00:24:59.672 { 00:24:59.672 "subsystem": "keyring", 00:24:59.672 "config": [] 00:24:59.672 }, 00:24:59.672 { 00:24:59.672 "subsystem": "iobuf", 00:24:59.672 "config": [ 00:24:59.672 { 00:24:59.672 "method": "iobuf_set_options", 00:24:59.672 "params": { 00:24:59.672 "large_bufsize": 135168, 00:24:59.672 "large_pool_count": 1024, 00:24:59.672 "small_bufsize": 8192, 00:24:59.672 "small_pool_count": 8192 00:24:59.672 } 
00:24:59.672 } 00:24:59.672 ] 00:24:59.672 }, 00:24:59.672 { 00:24:59.672 "subsystem": "sock", 00:24:59.672 "config": [ 00:24:59.672 { 00:24:59.672 "method": "sock_impl_set_options", 00:24:59.672 "params": { 00:24:59.672 "enable_ktls": false, 00:24:59.672 "enable_placement_id": 0, 00:24:59.672 "enable_quickack": false, 00:24:59.672 "enable_recv_pipe": true, 00:24:59.672 "enable_zerocopy_send_client": false, 00:24:59.672 "enable_zerocopy_send_server": true, 00:24:59.672 "impl_name": "posix", 00:24:59.672 "recv_buf_size": 2097152, 00:24:59.672 "send_buf_size": 2097152, 00:24:59.672 "tls_version": 0, 00:24:59.672 "zerocopy_threshold": 0 00:24:59.672 } 00:24:59.672 }, 00:24:59.672 { 00:24:59.672 "method": "sock_impl_set_options", 00:24:59.672 "params": { 00:24:59.672 "enable_ktls": false, 00:24:59.672 "enable_placement_id": 0, 00:24:59.672 "enable_quickack": false, 00:24:59.672 "enable_recv_pipe": true, 00:24:59.672 "enable_zerocopy_send_client": false, 00:24:59.672 "enable_zerocopy_send_server": true, 00:24:59.673 "impl_name": "ssl", 00:24:59.673 "recv_buf_size": 4096, 00:24:59.673 "send_buf_size": 4096, 00:24:59.673 "tls_version": 0, 00:24:59.673 "zerocopy_threshold": 0 00:24:59.673 } 00:24:59.673 } 00:24:59.673 ] 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "subsystem": "vmd", 00:24:59.673 "config": [] 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "subsystem": "accel", 00:24:59.673 "config": [ 00:24:59.673 { 00:24:59.673 "method": "accel_set_options", 00:24:59.673 "params": { 00:24:59.673 "buf_count": 2048, 00:24:59.673 "large_cache_size": 16, 00:24:59.673 "sequence_count": 2048, 00:24:59.673 "small_cache_size": 128, 00:24:59.673 "task_count": 2048 00:24:59.673 } 00:24:59.673 } 00:24:59.673 ] 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "subsystem": "bdev", 00:24:59.673 "config": [ 00:24:59.673 { 00:24:59.673 "method": "bdev_set_options", 00:24:59.673 "params": { 00:24:59.673 "bdev_auto_examine": true, 00:24:59.673 "bdev_io_cache_size": 256, 00:24:59.673 "bdev_io_pool_size": 65535, 00:24:59.673 "iobuf_large_cache_size": 16, 00:24:59.673 "iobuf_small_cache_size": 128 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "bdev_raid_set_options", 00:24:59.673 "params": { 00:24:59.673 "process_window_size_kb": 1024 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "bdev_iscsi_set_options", 00:24:59.673 "params": { 00:24:59.673 "timeout_sec": 30 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "bdev_nvme_set_options", 00:24:59.673 "params": { 00:24:59.673 "action_on_timeout": "none", 00:24:59.673 "allow_accel_sequence": false, 00:24:59.673 "arbitration_burst": 0, 00:24:59.673 "bdev_retry_count": 3, 00:24:59.673 "ctrlr_loss_timeout_sec": 0, 00:24:59.673 "delay_cmd_submit": true, 00:24:59.673 "dhchap_dhgroups": [ 00:24:59.673 "null", 00:24:59.673 "ffdhe2048", 00:24:59.673 "ffdhe3072", 00:24:59.673 "ffdhe4096", 00:24:59.673 "ffdhe6144", 00:24:59.673 "ffdhe8192" 00:24:59.673 ], 00:24:59.673 "dhchap_digests": [ 00:24:59.673 "sha256", 00:24:59.673 "sha384", 00:24:59.673 "sha512" 00:24:59.673 ], 00:24:59.673 "disable_auto_failback": false, 00:24:59.673 "fast_io_fail_timeout_sec": 0, 00:24:59.673 "generate_uuids": false, 00:24:59.673 "high_priority_weight": 0, 00:24:59.673 "io_path_stat": false, 00:24:59.673 "io_queue_requests": 0, 00:24:59.673 "keep_alive_timeout_ms": 10000, 00:24:59.673 "low_priority_weight": 0, 00:24:59.673 "medium_priority_weight": 0, 00:24:59.673 "nvme_adminq_poll_period_us": 10000, 00:24:59.673 "nvme_error_stat": false, 
00:24:59.673 "nvme_ioq_poll_period_us": 0, 00:24:59.673 "rdma_cm_event_timeout_ms": 0, 00:24:59.673 "rdma_max_cq_size": 0, 00:24:59.673 "rdma_srq_size": 0, 00:24:59.673 "reconnect_delay_sec": 0, 00:24:59.673 "timeout_admin_us": 0, 00:24:59.673 "timeout_us": 0, 00:24:59.673 "transport_ack_timeout": 0, 00:24:59.673 "transport_retry_count": 4, 00:24:59.673 "transport_tos": 0 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "bdev_nvme_set_hotplug", 00:24:59.673 "params": { 00:24:59.673 "enable": false, 00:24:59.673 "period_us": 100000 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "bdev_malloc_create", 00:24:59.673 "params": { 00:24:59.673 "block_size": 4096, 00:24:59.673 "name": "malloc0", 00:24:59.673 "num_blocks": 8192, 00:24:59.673 "optimal_io_boundary": 0, 00:24:59.673 "physical_block_size": 4096, 00:24:59.673 "uuid": "3e1e73cc-1d05-4670-bbb7-47b3d5b67d19" 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "bdev_wait_for_examine" 00:24:59.673 } 00:24:59.673 ] 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "subsystem": "nbd", 00:24:59.673 "config": [] 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "subsystem": "scheduler", 00:24:59.673 "config": [ 00:24:59.673 { 00:24:59.673 "method": "framework_set_scheduler", 00:24:59.673 "params": { 00:24:59.673 "name": "static" 00:24:59.673 } 00:24:59.673 } 00:24:59.673 ] 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "subsystem": "nvmf", 00:24:59.673 "config": [ 00:24:59.673 { 00:24:59.673 "method": "nvmf_set_config", 00:24:59.673 "params": { 00:24:59.673 "admin_cmd_passthru": { 00:24:59.673 "identify_ctrlr": false 00:24:59.673 }, 00:24:59.673 "discovery_filter": "match_any" 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "nvmf_set_max_subsystems", 00:24:59.673 "params": { 00:24:59.673 "max_subsystems": 1024 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "nvmf_set_crdt", 00:24:59.673 "params": { 00:24:59.673 "crdt1": 0, 00:24:59.673 "crdt2": 0, 00:24:59.673 "crdt3": 0 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "nvmf_create_transport", 00:24:59.673 "params": { 00:24:59.673 "abort_timeout_sec": 1, 00:24:59.673 "ack_timeout": 0, 00:24:59.673 "buf_cache_size": 4294967295, 00:24:59.673 "c2h_success": false, 00:24:59.673 "dif_insert_or_strip": false, 00:24:59.673 "in_capsule_data_size": 4096, 00:24:59.673 "io_unit_size": 131072, 00:24:59.673 "max_aq_depth": 128, 00:24:59.673 "max_io_qpairs_per_ctrlr": 127, 00:24:59.673 "max_io_size": 131072, 00:24:59.673 "max_queue_depth": 128, 00:24:59.673 "num_shared_buffers": 511, 00:24:59.673 "sock_priority": 0, 00:24:59.673 "trtype": "TCP", 00:24:59.673 "zcopy": false 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "nvmf_create_subsystem", 00:24:59.673 "params": { 00:24:59.673 "allow_any_host": false, 00:24:59.673 "ana_reporting": false, 00:24:59.673 "max_cntlid": 65519, 00:24:59.673 "max_namespaces": 10, 00:24:59.673 "min_cntlid": 1, 00:24:59.673 "model_number": "SPDK bdev Controller", 00:24:59.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.673 "serial_number": "SPDK00000000000001" 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "nvmf_subsystem_add_host", 00:24:59.673 "params": { 00:24:59.673 "host": "nqn.2016-06.io.spdk:host1", 00:24:59.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.673 "psk": "/tmp/tmp.crRmH0pMpj" 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "nvmf_subsystem_add_ns", 00:24:59.673 "params": { 00:24:59.673 
"namespace": { 00:24:59.673 "bdev_name": "malloc0", 00:24:59.673 "nguid": "3E1E73CC1D054670BBB747B3D5B67D19", 00:24:59.673 "no_auto_visible": false, 00:24:59.673 "nsid": 1, 00:24:59.673 "uuid": "3e1e73cc-1d05-4670-bbb7-47b3d5b67d19" 00:24:59.673 }, 00:24:59.673 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:59.673 } 00:24:59.673 }, 00:24:59.673 { 00:24:59.673 "method": "nvmf_subsystem_add_listener", 00:24:59.673 "params": { 00:24:59.673 "listen_address": { 00:24:59.673 "adrfam": "IPv4", 00:24:59.673 "traddr": "10.0.0.2", 00:24:59.673 "trsvcid": "4420", 00:24:59.673 "trtype": "TCP" 00:24:59.673 }, 00:24:59.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.673 "secure_channel": true 00:24:59.673 } 00:24:59.673 } 00:24:59.673 ] 00:24:59.673 } 00:24:59.673 ] 00:24:59.673 }' 00:24:59.673 11:12:28 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:59.931 11:12:28 -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:59.931 "subsystems": [ 00:24:59.931 { 00:24:59.931 "subsystem": "keyring", 00:24:59.931 "config": [] 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "subsystem": "iobuf", 00:24:59.931 "config": [ 00:24:59.931 { 00:24:59.931 "method": "iobuf_set_options", 00:24:59.931 "params": { 00:24:59.931 "large_bufsize": 135168, 00:24:59.931 "large_pool_count": 1024, 00:24:59.931 "small_bufsize": 8192, 00:24:59.931 "small_pool_count": 8192 00:24:59.931 } 00:24:59.931 } 00:24:59.931 ] 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "subsystem": "sock", 00:24:59.931 "config": [ 00:24:59.931 { 00:24:59.931 "method": "sock_impl_set_options", 00:24:59.931 "params": { 00:24:59.931 "enable_ktls": false, 00:24:59.931 "enable_placement_id": 0, 00:24:59.931 "enable_quickack": false, 00:24:59.931 "enable_recv_pipe": true, 00:24:59.931 "enable_zerocopy_send_client": false, 00:24:59.931 "enable_zerocopy_send_server": true, 00:24:59.931 "impl_name": "posix", 00:24:59.931 "recv_buf_size": 2097152, 00:24:59.931 "send_buf_size": 2097152, 00:24:59.931 "tls_version": 0, 00:24:59.931 "zerocopy_threshold": 0 00:24:59.931 } 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "method": "sock_impl_set_options", 00:24:59.931 "params": { 00:24:59.931 "enable_ktls": false, 00:24:59.931 "enable_placement_id": 0, 00:24:59.931 "enable_quickack": false, 00:24:59.931 "enable_recv_pipe": true, 00:24:59.931 "enable_zerocopy_send_client": false, 00:24:59.931 "enable_zerocopy_send_server": true, 00:24:59.931 "impl_name": "ssl", 00:24:59.931 "recv_buf_size": 4096, 00:24:59.931 "send_buf_size": 4096, 00:24:59.931 "tls_version": 0, 00:24:59.931 "zerocopy_threshold": 0 00:24:59.931 } 00:24:59.931 } 00:24:59.931 ] 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "subsystem": "vmd", 00:24:59.931 "config": [] 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "subsystem": "accel", 00:24:59.931 "config": [ 00:24:59.931 { 00:24:59.931 "method": "accel_set_options", 00:24:59.931 "params": { 00:24:59.931 "buf_count": 2048, 00:24:59.931 "large_cache_size": 16, 00:24:59.931 "sequence_count": 2048, 00:24:59.931 "small_cache_size": 128, 00:24:59.931 "task_count": 2048 00:24:59.931 } 00:24:59.931 } 00:24:59.931 ] 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "subsystem": "bdev", 00:24:59.931 "config": [ 00:24:59.931 { 00:24:59.931 "method": "bdev_set_options", 00:24:59.931 "params": { 00:24:59.931 "bdev_auto_examine": true, 00:24:59.931 "bdev_io_cache_size": 256, 00:24:59.931 "bdev_io_pool_size": 65535, 00:24:59.931 "iobuf_large_cache_size": 16, 00:24:59.931 "iobuf_small_cache_size": 128 00:24:59.931 } 00:24:59.931 }, 
00:24:59.931 { 00:24:59.931 "method": "bdev_raid_set_options", 00:24:59.931 "params": { 00:24:59.931 "process_window_size_kb": 1024 00:24:59.931 } 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "method": "bdev_iscsi_set_options", 00:24:59.931 "params": { 00:24:59.931 "timeout_sec": 30 00:24:59.931 } 00:24:59.931 }, 00:24:59.931 { 00:24:59.931 "method": "bdev_nvme_set_options", 00:24:59.931 "params": { 00:24:59.932 "action_on_timeout": "none", 00:24:59.932 "allow_accel_sequence": false, 00:24:59.932 "arbitration_burst": 0, 00:24:59.932 "bdev_retry_count": 3, 00:24:59.932 "ctrlr_loss_timeout_sec": 0, 00:24:59.932 "delay_cmd_submit": true, 00:24:59.932 "dhchap_dhgroups": [ 00:24:59.932 "null", 00:24:59.932 "ffdhe2048", 00:24:59.932 "ffdhe3072", 00:24:59.932 "ffdhe4096", 00:24:59.932 "ffdhe6144", 00:24:59.932 "ffdhe8192" 00:24:59.932 ], 00:24:59.932 "dhchap_digests": [ 00:24:59.932 "sha256", 00:24:59.932 "sha384", 00:24:59.932 "sha512" 00:24:59.932 ], 00:24:59.932 "disable_auto_failback": false, 00:24:59.932 "fast_io_fail_timeout_sec": 0, 00:24:59.932 "generate_uuids": false, 00:24:59.932 "high_priority_weight": 0, 00:24:59.932 "io_path_stat": false, 00:24:59.932 "io_queue_requests": 512, 00:24:59.932 "keep_alive_timeout_ms": 10000, 00:24:59.932 "low_priority_weight": 0, 00:24:59.932 "medium_priority_weight": 0, 00:24:59.932 "nvme_adminq_poll_period_us": 10000, 00:24:59.932 "nvme_error_stat": false, 00:24:59.932 "nvme_ioq_poll_period_us": 0, 00:24:59.932 "rdma_cm_event_timeout_ms": 0, 00:24:59.932 "rdma_max_cq_size": 0, 00:24:59.932 "rdma_srq_size": 0, 00:24:59.932 "reconnect_delay_sec": 0, 00:24:59.932 "timeout_admin_us": 0, 00:24:59.932 "timeout_us": 0, 00:24:59.932 "transport_ack_timeout": 0, 00:24:59.932 "transport_retry_count": 4, 00:24:59.932 "transport_tos": 0 00:24:59.932 } 00:24:59.932 }, 00:24:59.932 { 00:24:59.932 "method": "bdev_nvme_attach_controller", 00:24:59.932 "params": { 00:24:59.932 "adrfam": "IPv4", 00:24:59.932 "ctrlr_loss_timeout_sec": 0, 00:24:59.932 "ddgst": false, 00:24:59.932 "fast_io_fail_timeout_sec": 0, 00:24:59.932 "hdgst": false, 00:24:59.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:59.932 "name": "TLSTEST", 00:24:59.932 "prchk_guard": false, 00:24:59.932 "prchk_reftag": false, 00:24:59.932 "psk": "/tmp/tmp.crRmH0pMpj", 00:24:59.932 "reconnect_delay_sec": 0, 00:24:59.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.932 "traddr": "10.0.0.2", 00:24:59.932 "trsvcid": "4420", 00:24:59.932 "trtype": "TCP" 00:24:59.932 } 00:24:59.932 }, 00:24:59.932 { 00:24:59.932 "method": "bdev_nvme_set_hotplug", 00:24:59.932 "params": { 00:24:59.932 "enable": false, 00:24:59.932 "period_us": 100000 00:24:59.932 } 00:24:59.932 }, 00:24:59.932 { 00:24:59.932 "method": "bdev_wait_for_examine" 00:24:59.932 } 00:24:59.932 ] 00:24:59.932 }, 00:24:59.932 { 00:24:59.932 "subsystem": "nbd", 00:24:59.932 "config": [] 00:24:59.932 } 00:24:59.932 ] 00:24:59.932 }' 00:24:59.932 11:12:28 -- target/tls.sh@199 -- # killprocess 94556 00:24:59.932 11:12:28 -- common/autotest_common.sh@936 -- # '[' -z 94556 ']' 00:24:59.932 11:12:28 -- common/autotest_common.sh@940 -- # kill -0 94556 00:24:59.932 11:12:28 -- common/autotest_common.sh@941 -- # uname 00:24:59.932 11:12:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:59.932 11:12:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94556 00:25:00.190 11:12:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:00.190 11:12:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 
00:25:00.190 11:12:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94556' 00:25:00.190 killing process with pid 94556 00:25:00.190 11:12:28 -- common/autotest_common.sh@955 -- # kill 94556 00:25:00.190 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.190 00:25:00.190 Latency(us) 00:25:00.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.190 =================================================================================================================== 00:25:00.190 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:00.190 [2024-04-18 11:12:28.595457] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:00.190 11:12:28 -- common/autotest_common.sh@960 -- # wait 94556 00:25:00.190 11:12:28 -- target/tls.sh@200 -- # killprocess 94458 00:25:00.190 11:12:28 -- common/autotest_common.sh@936 -- # '[' -z 94458 ']' 00:25:00.190 11:12:28 -- common/autotest_common.sh@940 -- # kill -0 94458 00:25:00.190 11:12:28 -- common/autotest_common.sh@941 -- # uname 00:25:00.190 11:12:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.190 11:12:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94458 00:25:00.448 11:12:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:00.448 11:12:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:00.448 killing process with pid 94458 00:25:00.448 11:12:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94458' 00:25:00.448 11:12:28 -- common/autotest_common.sh@955 -- # kill 94458 00:25:00.448 [2024-04-18 11:12:28.832426] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:00.448 11:12:28 -- common/autotest_common.sh@960 -- # wait 94458 00:25:00.448 11:12:29 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:00.448 11:12:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:00.448 11:12:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:00.448 11:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:00.448 11:12:29 -- target/tls.sh@203 -- # echo '{ 00:25:00.448 "subsystems": [ 00:25:00.448 { 00:25:00.448 "subsystem": "keyring", 00:25:00.448 "config": [] 00:25:00.448 }, 00:25:00.448 { 00:25:00.448 "subsystem": "iobuf", 00:25:00.448 "config": [ 00:25:00.448 { 00:25:00.448 "method": "iobuf_set_options", 00:25:00.448 "params": { 00:25:00.448 "large_bufsize": 135168, 00:25:00.448 "large_pool_count": 1024, 00:25:00.448 "small_bufsize": 8192, 00:25:00.448 "small_pool_count": 8192 00:25:00.448 } 00:25:00.448 } 00:25:00.448 ] 00:25:00.448 }, 00:25:00.448 { 00:25:00.448 "subsystem": "sock", 00:25:00.448 "config": [ 00:25:00.448 { 00:25:00.448 "method": "sock_impl_set_options", 00:25:00.448 "params": { 00:25:00.448 "enable_ktls": false, 00:25:00.448 "enable_placement_id": 0, 00:25:00.448 "enable_quickack": false, 00:25:00.448 "enable_recv_pipe": true, 00:25:00.448 "enable_zerocopy_send_client": false, 00:25:00.448 "enable_zerocopy_send_server": true, 00:25:00.448 "impl_name": "posix", 00:25:00.448 "recv_buf_size": 2097152, 00:25:00.448 "send_buf_size": 2097152, 00:25:00.448 "tls_version": 0, 00:25:00.448 "zerocopy_threshold": 0 00:25:00.448 } 00:25:00.448 }, 00:25:00.448 { 00:25:00.448 "method": "sock_impl_set_options", 00:25:00.449 "params": { 00:25:00.449 "enable_ktls": false, 
00:25:00.449 "enable_placement_id": 0, 00:25:00.449 "enable_quickack": false, 00:25:00.449 "enable_recv_pipe": true, 00:25:00.449 "enable_zerocopy_send_client": false, 00:25:00.449 "enable_zerocopy_send_server": true, 00:25:00.449 "impl_name": "ssl", 00:25:00.449 "recv_buf_size": 4096, 00:25:00.449 "send_buf_size": 4096, 00:25:00.449 "tls_version": 0, 00:25:00.449 "zerocopy_threshold": 0 00:25:00.449 } 00:25:00.449 } 00:25:00.449 ] 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "subsystem": "vmd", 00:25:00.449 "config": [] 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "subsystem": "accel", 00:25:00.449 "config": [ 00:25:00.449 { 00:25:00.449 "method": "accel_set_options", 00:25:00.449 "params": { 00:25:00.449 "buf_count": 2048, 00:25:00.449 "large_cache_size": 16, 00:25:00.449 "sequence_count": 2048, 00:25:00.449 "small_cache_size": 128, 00:25:00.449 "task_count": 2048 00:25:00.449 } 00:25:00.449 } 00:25:00.449 ] 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "subsystem": "bdev", 00:25:00.449 "config": [ 00:25:00.449 { 00:25:00.449 "method": "bdev_set_options", 00:25:00.449 "params": { 00:25:00.449 "bdev_auto_examine": true, 00:25:00.449 "bdev_io_cache_size": 256, 00:25:00.449 "bdev_io_pool_size": 65535, 00:25:00.449 "iobuf_large_cache_size": 16, 00:25:00.449 "iobuf_small_cache_size": 128 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "bdev_raid_set_options", 00:25:00.449 "params": { 00:25:00.449 "process_window_size_kb": 1024 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "bdev_iscsi_set_options", 00:25:00.449 "params": { 00:25:00.449 "timeout_sec": 30 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "bdev_nvme_set_options", 00:25:00.449 "params": { 00:25:00.449 "action_on_timeout": "none", 00:25:00.449 "allow_accel_sequence": false, 00:25:00.449 "arbitration_burst": 0, 00:25:00.449 "bdev_retry_count": 3, 00:25:00.449 "ctrlr_loss_timeout_sec": 0, 00:25:00.449 "delay_cmd_submit": true, 00:25:00.449 "dhchap_dhgroups": [ 00:25:00.449 "null", 00:25:00.449 "ffdhe2048", 00:25:00.449 "ffdhe3072", 00:25:00.449 "ffdhe4096", 00:25:00.449 "ffdhe6144", 00:25:00.449 "ffdhe8192" 00:25:00.449 ], 00:25:00.449 "dhchap_digests": [ 00:25:00.449 "sha256", 00:25:00.449 "sha384", 00:25:00.449 "sha512" 00:25:00.449 ], 00:25:00.449 "disable_auto_failback": false, 00:25:00.449 "fast_io_fail_timeout_sec": 0, 00:25:00.449 "generate_uuids": false, 00:25:00.449 "high_priority_weight": 0, 00:25:00.449 "io_path_stat": false, 00:25:00.449 "io_queue_requests": 0, 00:25:00.449 "keep_alive_timeout_ms": 10000, 00:25:00.449 "low_priority_weight": 0, 00:25:00.449 "medium_priority_weight": 0, 00:25:00.449 "nvme_adminq_poll_period_us": 10000, 00:25:00.449 "nvme_error_stat": false, 00:25:00.449 "nvme_ioq_poll_period_us": 0, 00:25:00.449 "rdma_cm_event_timeout_ms": 0, 00:25:00.449 "rdma_max_cq_size": 0, 00:25:00.449 "rdma_srq_size": 0, 00:25:00.449 "reconnect_delay_sec": 0, 00:25:00.449 "timeout_admin_us": 0, 00:25:00.449 "timeout_us": 0, 00:25:00.449 "transport_ack_timeout": 0, 00:25:00.449 "transport_retry_count": 4, 00:25:00.449 "transport_tos": 0 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "bdev_nvme_set_hotplug", 00:25:00.449 "params": { 00:25:00.449 "enable": false, 00:25:00.449 "period_us": 100000 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "bdev_malloc_create", 00:25:00.449 "params": { 00:25:00.449 "block_size": 4096, 00:25:00.449 "name": "malloc0", 00:25:00.449 "num_blocks": 8192, 00:25:00.449 "optimal_io_boundary": 
0, 00:25:00.449 "physical_block_size": 4096, 00:25:00.449 "uuid": "3e1e73cc-1d05-4670-bbb7-47b3d5b67d19" 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "bdev_wait_for_examine" 00:25:00.449 } 00:25:00.449 ] 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "subsystem": "nbd", 00:25:00.449 "config": [] 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "subsystem": "scheduler", 00:25:00.449 "config": [ 00:25:00.449 { 00:25:00.449 "method": "framework_set_scheduler", 00:25:00.449 "params": { 00:25:00.449 "name": "static" 00:25:00.449 } 00:25:00.449 } 00:25:00.449 ] 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "subsystem": "nvmf", 00:25:00.449 "config": [ 00:25:00.449 { 00:25:00.449 "method": "nvmf_set_config", 00:25:00.449 "params": { 00:25:00.449 "admin_cmd_passthru": { 00:25:00.449 "identify_ctrlr": false 00:25:00.449 }, 00:25:00.449 "discovery_filter": "match_any" 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "nvmf_set_max_subsystems", 00:25:00.449 "params": { 00:25:00.449 "max_subsystems": 1024 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "nvmf_set_crdt", 00:25:00.449 "params": { 00:25:00.449 "crdt1": 0, 00:25:00.449 "crdt2": 0, 00:25:00.449 "crdt3": 0 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "nvmf_create_transport", 00:25:00.449 "params": { 00:25:00.449 "abort_timeout_sec": 1, 00:25:00.449 "ack_timeout": 0, 00:25:00.449 "buf_cache_size": 4294967295, 00:25:00.449 "c2h_success": false, 00:25:00.449 "dif_insert_or_strip": false, 00:25:00.449 "in_capsule_data_size": 4096, 00:25:00.449 "io_unit_size": 131072, 00:25:00.449 "max_aq_depth": 128, 00:25:00.449 "max_io_qpairs_per_ctrlr": 127, 00:25:00.449 "max_io_size": 131072, 00:25:00.449 "max_queue_depth": 128, 00:25:00.449 "num_shared_buffers": 511, 00:25:00.449 "sock_priority": 0, 00:25:00.449 "trtype": "TCP", 00:25:00.449 "zcopy": false 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "nvmf_create_subsystem", 00:25:00.449 "params": { 00:25:00.449 "allow_any_host": false, 00:25:00.449 "ana_reporting": false, 00:25:00.449 "max_cntlid": 65519, 00:25:00.449 "max_namespaces": 10, 00:25:00.449 "min_cntlid": 1, 00:25:00.449 "model_number": "SPDK bdev Controller", 00:25:00.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.449 "serial_number": "SPDK00000000000001" 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "nvmf_subsystem_add_host", 00:25:00.449 "params": { 00:25:00.449 "host": "nqn.2016-06.io.spdk:host1", 00:25:00.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.449 "psk": "/tmp/tmp.crRmH0pMpj" 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "nvmf_subsystem_add_ns", 00:25:00.449 "params": { 00:25:00.449 "namespace": { 00:25:00.449 "bdev_name": "malloc0", 00:25:00.449 "nguid": "3E1E73CC1D054670BBB747B3D5B67D19", 00:25:00.449 "no_auto_visible": false, 00:25:00.449 "nsid": 1, 00:25:00.449 "uuid": "3e1e73cc-1d05-4670-bbb7-47b3d5b67d19" 00:25:00.449 }, 00:25:00.449 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:00.449 } 00:25:00.449 }, 00:25:00.449 { 00:25:00.449 "method": "nvmf_subsystem_add_listener", 00:25:00.449 "params": { 00:25:00.449 "listen_address": { 00:25:00.449 "adrfam": "IPv4", 00:25:00.449 "traddr": "10.0.0.2", 00:25:00.449 "trsvcid": "4420", 00:25:00.449 "trtype": "TCP" 00:25:00.449 }, 00:25:00.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.449 "secure_channel": true 00:25:00.449 } 00:25:00.449 } 00:25:00.449 ] 00:25:00.449 } 00:25:00.449 ] 00:25:00.449 }' 00:25:00.449 11:12:29 -- 
nvmf/common.sh@470 -- # nvmfpid=94635 00:25:00.449 11:12:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:00.449 11:12:29 -- nvmf/common.sh@471 -- # waitforlisten 94635 00:25:00.449 11:12:29 -- common/autotest_common.sh@817 -- # '[' -z 94635 ']' 00:25:00.449 11:12:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.449 11:12:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.450 11:12:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.450 11:12:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.450 11:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:00.708 [2024-04-18 11:12:29.128637] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:00.708 [2024-04-18 11:12:29.128733] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.708 [2024-04-18 11:12:29.268242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.967 [2024-04-18 11:12:29.363389] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.967 [2024-04-18 11:12:29.363442] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.967 [2024-04-18 11:12:29.363454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.967 [2024-04-18 11:12:29.363463] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.967 [2024-04-18 11:12:29.363470] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.967 [2024-04-18 11:12:29.363557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.967 [2024-04-18 11:12:29.586070] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.967 [2024-04-18 11:12:29.602011] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:01.225 [2024-04-18 11:12:29.617991] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.225 [2024-04-18 11:12:29.618193] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.793 11:12:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:01.793 11:12:30 -- common/autotest_common.sh@850 -- # return 0 00:25:01.793 11:12:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:01.793 11:12:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:01.793 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:25:01.793 11:12:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
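The JSON blob echoed above is the target configuration captured earlier in the run with save_config; target/tls.sh replays it into a fresh nvmf_tgt through '-c /dev/fd/62', so the new instance comes up with the transport, subsystem, TLS listener, namespace and PSK host already in place instead of re-issuing each RPC. A minimal sketch of the same replay done by hand, assuming the repository path and network namespace shown in the trace (the /tmp/tgt_config.json file name is only illustrative):

  # capture the live configuration of a running target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt_config.json
  # start a fresh target pre-loaded with that configuration
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_config.json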
00:25:01.793 11:12:30 -- target/tls.sh@207 -- # bdevperf_pid=94679 00:25:01.793 11:12:30 -- target/tls.sh@208 -- # waitforlisten 94679 /var/tmp/bdevperf.sock 00:25:01.793 11:12:30 -- common/autotest_common.sh@817 -- # '[' -z 94679 ']' 00:25:01.793 11:12:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.793 11:12:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:01.793 11:12:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.793 11:12:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:01.793 11:12:30 -- target/tls.sh@204 -- # echo '{ 00:25:01.793 "subsystems": [ 00:25:01.793 { 00:25:01.793 "subsystem": "keyring", 00:25:01.793 "config": [] 00:25:01.793 }, 00:25:01.793 { 00:25:01.793 "subsystem": "iobuf", 00:25:01.793 "config": [ 00:25:01.793 { 00:25:01.793 "method": "iobuf_set_options", 00:25:01.793 "params": { 00:25:01.793 "large_bufsize": 135168, 00:25:01.793 "large_pool_count": 1024, 00:25:01.793 "small_bufsize": 8192, 00:25:01.793 "small_pool_count": 8192 00:25:01.793 } 00:25:01.793 } 00:25:01.793 ] 00:25:01.793 }, 00:25:01.793 { 00:25:01.793 "subsystem": "sock", 00:25:01.793 "config": [ 00:25:01.793 { 00:25:01.793 "method": "sock_impl_set_options", 00:25:01.793 "params": { 00:25:01.793 "enable_ktls": false, 00:25:01.793 "enable_placement_id": 0, 00:25:01.793 "enable_quickack": false, 00:25:01.793 "enable_recv_pipe": true, 00:25:01.793 "enable_zerocopy_send_client": false, 00:25:01.793 "enable_zerocopy_send_server": true, 00:25:01.793 "impl_name": "posix", 00:25:01.793 "recv_buf_size": 2097152, 00:25:01.793 "send_buf_size": 2097152, 00:25:01.793 "tls_version": 0, 00:25:01.793 "zerocopy_threshold": 0 00:25:01.793 } 00:25:01.793 }, 00:25:01.793 { 00:25:01.793 "method": "sock_impl_set_options", 00:25:01.793 "params": { 00:25:01.793 "enable_ktls": false, 00:25:01.793 "enable_placement_id": 0, 00:25:01.793 "enable_quickack": false, 00:25:01.793 "enable_recv_pipe": true, 00:25:01.793 "enable_zerocopy_send_client": false, 00:25:01.793 "enable_zerocopy_send_server": true, 00:25:01.793 "impl_name": "ssl", 00:25:01.794 "recv_buf_size": 4096, 00:25:01.794 "send_buf_size": 4096, 00:25:01.794 "tls_version": 0, 00:25:01.794 "zerocopy_threshold": 0 00:25:01.794 } 00:25:01.794 } 00:25:01.794 ] 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "subsystem": "vmd", 00:25:01.794 "config": [] 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "subsystem": "accel", 00:25:01.794 "config": [ 00:25:01.794 { 00:25:01.794 "method": "accel_set_options", 00:25:01.794 "params": { 00:25:01.794 "buf_count": 2048, 00:25:01.794 "large_cache_size": 16, 00:25:01.794 "sequence_count": 2048, 00:25:01.794 "small_cache_size": 128, 00:25:01.794 "task_count": 2048 00:25:01.794 } 00:25:01.794 } 00:25:01.794 ] 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "subsystem": "bdev", 00:25:01.794 "config": [ 00:25:01.794 { 00:25:01.794 "method": "bdev_set_options", 00:25:01.794 "params": { 00:25:01.794 "bdev_auto_examine": true, 00:25:01.794 "bdev_io_cache_size": 256, 00:25:01.794 "bdev_io_pool_size": 65535, 00:25:01.794 "iobuf_large_cache_size": 16, 00:25:01.794 "iobuf_small_cache_size": 128 00:25:01.794 } 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "method": "bdev_raid_set_options", 00:25:01.794 "params": { 00:25:01.794 "process_window_size_kb": 1024 00:25:01.794 } 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "method": "bdev_iscsi_set_options", 00:25:01.794 "params": { 00:25:01.794 
"timeout_sec": 30 00:25:01.794 } 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "method": "bdev_nvme_set_options", 00:25:01.794 "params": { 00:25:01.794 "action_on_timeout": "none", 00:25:01.794 "allow_accel_sequence": false, 00:25:01.794 "arbitration_burst": 0, 00:25:01.794 "bdev_retry_count": 3, 00:25:01.794 "ctrlr_loss_timeout_sec": 0, 00:25:01.794 "delay_cmd_submit": true, 00:25:01.794 "dhchap_dhgroups": [ 00:25:01.794 "null", 00:25:01.794 "ffdhe2048", 00:25:01.794 "ffdhe3072", 00:25:01.794 "ffdhe4096", 00:25:01.794 "ffdhe6144", 00:25:01.794 "ffdhe8192" 00:25:01.794 ], 00:25:01.794 "dhchap_digests": [ 00:25:01.794 "sha256", 00:25:01.794 "sha384", 00:25:01.794 "sha512" 00:25:01.794 ], 00:25:01.794 "disable_auto_failback": false, 00:25:01.794 "fast_io_fail_timeout_sec": 0, 00:25:01.794 "generate_uuids": false, 00:25:01.794 "high_priority_weight": 0, 00:25:01.794 "io_path_stat": false, 00:25:01.794 "io_queue_requests": 512, 00:25:01.794 "keep_alive_timeout_ms": 10000, 00:25:01.794 "low_priority_weight": 0, 00:25:01.794 "medium_priority_weight": 0, 00:25:01.794 "nvme_adminq_poll_period_us": 10000, 00:25:01.794 "nvme_error_stat": false, 00:25:01.794 "nvme_ioq_poll_period_us": 0, 00:25:01.794 "rdma_cm_event_timeout_ms": 0, 00:25:01.794 "rdma_max_cq_size": 0, 00:25:01.794 "rdma_srq_size": 0, 00:25:01.794 "reconnect_delay_sec": 0, 00:25:01.794 "timeout_admin_us": 0, 00:25:01.794 "timeout_us": 0, 00:25:01.794 "transport_ack_timeout": 0, 00:25:01.794 "transport_retry_count": 4, 00:25:01.794 "transport_tos": 0 00:25:01.794 } 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "method": "bdev_nvme_attach_controller", 00:25:01.794 "params": { 00:25:01.794 "adrfam": "IPv4", 00:25:01.794 "ctrlr_loss_timeout_sec": 0, 00:25:01.794 "ddgst": false, 00:25:01.794 "fast_io_fail_timeout_sec": 0, 00:25:01.794 "hdgst": false, 00:25:01.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.794 "name": "TLSTEST", 00:25:01.794 "prchk_guard": false, 00:25:01.794 "prchk_reftag": false, 00:25:01.794 "psk": "/tmp/tmp.crRmH0pMpj", 00:25:01.794 "reconnect_delay_sec": 0, 00:25:01.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.794 "traddr": "10.0.0.2", 00:25:01.794 "trsvcid": "4420", 00:25:01.794 "trtype": "TCP" 00:25:01.794 } 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "method": "bdev_nvme_set_hotplug", 00:25:01.794 "params": { 00:25:01.794 "enable": false, 00:25:01.794 "period_us": 100000 00:25:01.794 } 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "method": "bdev_wait_for_examine" 00:25:01.794 } 00:25:01.794 ] 00:25:01.794 }, 00:25:01.794 { 00:25:01.794 "subsystem": "nbd", 00:25:01.794 "config": [] 00:25:01.794 } 00:25:01.794 ] 00:25:01.794 }' 00:25:01.794 11:12:30 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:01.794 11:12:30 -- common/autotest_common.sh@10 -- # set +x 00:25:01.794 [2024-04-18 11:12:30.229776] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:01.794 [2024-04-18 11:12:30.230157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94679 ] 00:25:01.794 [2024-04-18 11:12:30.368496] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.053 [2024-04-18 11:12:30.463011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.053 [2024-04-18 11:12:30.618278] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.053 [2024-04-18 11:12:30.618660] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:02.620 11:12:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:02.620 11:12:31 -- common/autotest_common.sh@850 -- # return 0 00:25:02.620 11:12:31 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:02.879 Running I/O for 10 seconds... 00:25:12.855 00:25:12.855 Latency(us) 00:25:12.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.855 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:12.855 Verification LBA range: start 0x0 length 0x2000 00:25:12.855 TLSTESTn1 : 10.03 3837.44 14.99 0.00 0.00 33277.90 7357.91 21448.15 00:25:12.855 =================================================================================================================== 00:25:12.855 Total : 3837.44 14.99 0.00 0.00 33277.90 7357.91 21448.15 00:25:12.855 0 00:25:12.855 11:12:41 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:12.855 11:12:41 -- target/tls.sh@214 -- # killprocess 94679 00:25:12.855 11:12:41 -- common/autotest_common.sh@936 -- # '[' -z 94679 ']' 00:25:12.855 11:12:41 -- common/autotest_common.sh@940 -- # kill -0 94679 00:25:12.855 11:12:41 -- common/autotest_common.sh@941 -- # uname 00:25:12.855 11:12:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.855 11:12:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94679 00:25:12.855 11:12:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:12.855 11:12:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:12.855 11:12:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94679' 00:25:12.855 killing process with pid 94679 00:25:12.855 Received shutdown signal, test time was about 10.000000 seconds 00:25:12.855 00:25:12.855 Latency(us) 00:25:12.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.855 =================================================================================================================== 00:25:12.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.855 11:12:41 -- common/autotest_common.sh@955 -- # kill 94679 00:25:12.855 [2024-04-18 11:12:41.398977] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:12.855 11:12:41 -- common/autotest_common.sh@960 -- # wait 94679 00:25:13.112 11:12:41 -- target/tls.sh@215 -- # killprocess 94635 00:25:13.112 11:12:41 -- common/autotest_common.sh@936 -- # '[' -z 94635 ']' 00:25:13.112 11:12:41 -- common/autotest_common.sh@940 -- # kill -0 94635 00:25:13.112 11:12:41 
-- common/autotest_common.sh@941 -- # uname 00:25:13.112 11:12:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:13.112 11:12:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94635 00:25:13.112 11:12:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:13.112 killing process with pid 94635 00:25:13.112 11:12:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:13.112 11:12:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94635' 00:25:13.112 11:12:41 -- common/autotest_common.sh@955 -- # kill 94635 00:25:13.112 [2024-04-18 11:12:41.641416] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:13.112 11:12:41 -- common/autotest_common.sh@960 -- # wait 94635 00:25:13.371 11:12:41 -- target/tls.sh@218 -- # nvmfappstart 00:25:13.371 11:12:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:13.371 11:12:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:13.371 11:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:13.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.371 11:12:41 -- nvmf/common.sh@470 -- # nvmfpid=94825 00:25:13.371 11:12:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:13.371 11:12:41 -- nvmf/common.sh@471 -- # waitforlisten 94825 00:25:13.371 11:12:41 -- common/autotest_common.sh@817 -- # '[' -z 94825 ']' 00:25:13.371 11:12:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.371 11:12:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:13.371 11:12:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.371 11:12:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:13.371 11:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:13.371 [2024-04-18 11:12:41.922182] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:13.371 [2024-04-18 11:12:41.922568] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.628 [2024-04-18 11:12:42.062968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.628 [2024-04-18 11:12:42.166768] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.628 [2024-04-18 11:12:42.166833] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.628 [2024-04-18 11:12:42.166848] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.628 [2024-04-18 11:12:42.166859] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.628 [2024-04-18 11:12:42.166868] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
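The trace that follows brings the target up again through individual RPCs rather than a replayed config file. Condensed into one sequence, assuming the key file, NQNs and addresses shown in the trace; note the PSK file must stay mode 0600, since an earlier attempt in this run failed with "Incorrect permissions for PSK file" until chmod 0600 was applied:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0600 /tmp/tmp.crRmH0pMpj        # PSK keyfile must not be group/world readable
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10   # -s serial number, -m max namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS (secure channel) listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj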
00:25:13.628 [2024-04-18 11:12:42.166903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.561 11:12:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:14.561 11:12:42 -- common/autotest_common.sh@850 -- # return 0 00:25:14.561 11:12:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:14.561 11:12:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:14.561 11:12:42 -- common/autotest_common.sh@10 -- # set +x 00:25:14.561 11:12:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.561 11:12:42 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.crRmH0pMpj 00:25:14.561 11:12:42 -- target/tls.sh@49 -- # local key=/tmp/tmp.crRmH0pMpj 00:25:14.561 11:12:42 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:14.818 [2024-04-18 11:12:43.241606] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.818 11:12:43 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:15.076 11:12:43 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:15.353 [2024-04-18 11:12:43.753701] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:15.353 [2024-04-18 11:12:43.753935] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.353 11:12:43 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:15.610 malloc0 00:25:15.610 11:12:44 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:15.868 11:12:44 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.crRmH0pMpj 00:25:15.868 [2024-04-18 11:12:44.501356] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:16.125 11:12:44 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:16.125 11:12:44 -- target/tls.sh@222 -- # bdevperf_pid=94928 00:25:16.125 11:12:44 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:16.125 11:12:44 -- target/tls.sh@225 -- # waitforlisten 94928 /var/tmp/bdevperf.sock 00:25:16.125 11:12:44 -- common/autotest_common.sh@817 -- # '[' -z 94928 ']' 00:25:16.125 11:12:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.125 11:12:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:16.125 11:12:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.125 11:12:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:16.125 11:12:44 -- common/autotest_common.sh@10 -- # set +x 00:25:16.125 [2024-04-18 11:12:44.579890] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:16.125 [2024-04-18 11:12:44.579998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94928 ] 00:25:16.125 [2024-04-18 11:12:44.719167] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.382 [2024-04-18 11:12:44.820004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.947 11:12:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.947 11:12:45 -- common/autotest_common.sh@850 -- # return 0 00:25:16.947 11:12:45 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.crRmH0pMpj 00:25:17.205 11:12:45 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:17.462 [2024-04-18 11:12:46.068516] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:17.719 nvme0n1 00:25:17.719 11:12:46 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.719 Running I/O for 1 seconds... 00:25:18.652 00:25:18.652 Latency(us) 00:25:18.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.652 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:18.652 Verification LBA range: start 0x0 length 0x2000 00:25:18.652 nvme0n1 : 1.03 3675.81 14.36 0.00 0.00 34288.40 7000.44 20852.36 00:25:18.652 =================================================================================================================== 00:25:18.652 Total : 3675.81 14.36 0.00 0.00 34288.40 7000.44 20852.36 00:25:18.652 0 00:25:18.910 11:12:47 -- target/tls.sh@234 -- # killprocess 94928 00:25:18.910 11:12:47 -- common/autotest_common.sh@936 -- # '[' -z 94928 ']' 00:25:18.910 11:12:47 -- common/autotest_common.sh@940 -- # kill -0 94928 00:25:18.910 11:12:47 -- common/autotest_common.sh@941 -- # uname 00:25:18.910 11:12:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.910 11:12:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94928 00:25:18.910 11:12:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:18.910 11:12:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:18.910 killing process with pid 94928 00:25:18.910 11:12:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94928' 00:25:18.910 Received shutdown signal, test time was about 1.000000 seconds 00:25:18.910 00:25:18.910 Latency(us) 00:25:18.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.910 =================================================================================================================== 00:25:18.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.910 11:12:47 -- common/autotest_common.sh@955 -- # kill 94928 00:25:18.910 11:12:47 -- common/autotest_common.sh@960 -- # wait 94928 00:25:18.910 11:12:47 -- target/tls.sh@235 -- # killprocess 94825 00:25:18.910 11:12:47 -- common/autotest_common.sh@936 -- # '[' -z 94825 ']' 00:25:18.910 11:12:47 -- common/autotest_common.sh@940 -- # kill -0 94825 00:25:18.910 11:12:47 -- common/autotest_common.sh@941 -- # 
uname 00:25:18.910 11:12:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.910 11:12:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94825 00:25:19.167 11:12:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.167 11:12:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.167 killing process with pid 94825 00:25:19.167 11:12:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94825' 00:25:19.167 11:12:47 -- common/autotest_common.sh@955 -- # kill 94825 00:25:19.168 [2024-04-18 11:12:47.559834] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:19.168 11:12:47 -- common/autotest_common.sh@960 -- # wait 94825 00:25:19.168 11:12:47 -- target/tls.sh@238 -- # nvmfappstart 00:25:19.168 11:12:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:19.168 11:12:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:19.168 11:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:19.168 11:12:47 -- nvmf/common.sh@470 -- # nvmfpid=95003 00:25:19.168 11:12:47 -- nvmf/common.sh@471 -- # waitforlisten 95003 00:25:19.168 11:12:47 -- common/autotest_common.sh@817 -- # '[' -z 95003 ']' 00:25:19.168 11:12:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.168 11:12:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:19.168 11:12:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.168 11:12:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:19.168 11:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:19.168 11:12:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:19.426 [2024-04-18 11:12:47.860999] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:19.426 [2024-04-18 11:12:47.861136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.426 [2024-04-18 11:12:48.010312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.684 [2024-04-18 11:12:48.098441] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.684 [2024-04-18 11:12:48.098529] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.684 [2024-04-18 11:12:48.098556] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.684 [2024-04-18 11:12:48.098564] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.684 [2024-04-18 11:12:48.098572] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
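On the initiator side, the bdevperf runs in this part of the log register the PSK with the keyring and reference it by name when attaching the controller, instead of handing bdev_nvme_attach_controller a raw key path (which the earlier runs flagged with a "deprecated feature spdk_nvme_ctrlr_opts.psk" warning). Condensed from the trace, assuming the same RPC socket and key file:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # register the PSK file under the name key0 in the bdevperf application's keyring
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.crRmH0pMpj
  # attach to the TLS listener, referencing the key by name
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # drive the verify workload defined on the bdevperf command line
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests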
00:25:19.684 [2024-04-18 11:12:48.098607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.251 11:12:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:20.251 11:12:48 -- common/autotest_common.sh@850 -- # return 0 00:25:20.251 11:12:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:20.251 11:12:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:20.251 11:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.509 11:12:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.510 11:12:48 -- target/tls.sh@239 -- # rpc_cmd 00:25:20.510 11:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.510 11:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.510 [2024-04-18 11:12:48.940348] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.510 malloc0 00:25:20.510 [2024-04-18 11:12:48.972523] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:20.510 [2024-04-18 11:12:48.972761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.510 11:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.510 11:12:49 -- target/tls.sh@252 -- # bdevperf_pid=95053 00:25:20.510 11:12:49 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:20.510 11:12:49 -- target/tls.sh@254 -- # waitforlisten 95053 /var/tmp/bdevperf.sock 00:25:20.510 11:12:49 -- common/autotest_common.sh@817 -- # '[' -z 95053 ']' 00:25:20.510 11:12:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.510 11:12:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:20.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:20.510 11:12:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.510 11:12:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:20.510 11:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:20.510 [2024-04-18 11:12:49.058523] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:20.510 [2024-04-18 11:12:49.058632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95053 ] 00:25:20.768 [2024-04-18 11:12:49.199392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.768 [2024-04-18 11:12:49.302412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.703 11:12:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.703 11:12:50 -- common/autotest_common.sh@850 -- # return 0 00:25:21.703 11:12:50 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.crRmH0pMpj 00:25:21.703 11:12:50 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:21.961 [2024-04-18 11:12:50.566622] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:22.219 nvme0n1 00:25:22.219 11:12:50 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:22.219 Running I/O for 1 seconds... 00:25:23.592 00:25:23.592 Latency(us) 00:25:23.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.592 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:23.592 Verification LBA range: start 0x0 length 0x2000 00:25:23.592 nvme0n1 : 1.03 3693.00 14.43 0.00 0.00 34156.61 7119.59 24665.37 00:25:23.592 =================================================================================================================== 00:25:23.592 Total : 3693.00 14.43 0.00 0.00 34156.61 7119.59 24665.37 00:25:23.592 0 00:25:23.592 11:12:51 -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:23.592 11:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.592 11:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:23.592 11:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.593 11:12:51 -- target/tls.sh@263 -- # tgtcfg='{ 00:25:23.593 "subsystems": [ 00:25:23.593 { 00:25:23.593 "subsystem": "keyring", 00:25:23.593 "config": [ 00:25:23.593 { 00:25:23.593 "method": "keyring_file_add_key", 00:25:23.593 "params": { 00:25:23.593 "name": "key0", 00:25:23.593 "path": "/tmp/tmp.crRmH0pMpj" 00:25:23.593 } 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "iobuf", 00:25:23.593 "config": [ 00:25:23.593 { 00:25:23.593 "method": "iobuf_set_options", 00:25:23.593 "params": { 00:25:23.593 "large_bufsize": 135168, 00:25:23.593 "large_pool_count": 1024, 00:25:23.593 "small_bufsize": 8192, 00:25:23.593 "small_pool_count": 8192 00:25:23.593 } 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "sock", 00:25:23.593 "config": [ 00:25:23.593 { 00:25:23.593 "method": "sock_impl_set_options", 00:25:23.593 "params": { 00:25:23.593 "enable_ktls": false, 00:25:23.593 "enable_placement_id": 0, 00:25:23.593 "enable_quickack": false, 00:25:23.593 "enable_recv_pipe": true, 00:25:23.593 "enable_zerocopy_send_client": false, 00:25:23.593 "enable_zerocopy_send_server": true, 00:25:23.593 "impl_name": "posix", 00:25:23.593 "recv_buf_size": 2097152, 00:25:23.593 "send_buf_size": 2097152, 
00:25:23.593 "tls_version": 0, 00:25:23.593 "zerocopy_threshold": 0 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "sock_impl_set_options", 00:25:23.593 "params": { 00:25:23.593 "enable_ktls": false, 00:25:23.593 "enable_placement_id": 0, 00:25:23.593 "enable_quickack": false, 00:25:23.593 "enable_recv_pipe": true, 00:25:23.593 "enable_zerocopy_send_client": false, 00:25:23.593 "enable_zerocopy_send_server": true, 00:25:23.593 "impl_name": "ssl", 00:25:23.593 "recv_buf_size": 4096, 00:25:23.593 "send_buf_size": 4096, 00:25:23.593 "tls_version": 0, 00:25:23.593 "zerocopy_threshold": 0 00:25:23.593 } 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "vmd", 00:25:23.593 "config": [] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "accel", 00:25:23.593 "config": [ 00:25:23.593 { 00:25:23.593 "method": "accel_set_options", 00:25:23.593 "params": { 00:25:23.593 "buf_count": 2048, 00:25:23.593 "large_cache_size": 16, 00:25:23.593 "sequence_count": 2048, 00:25:23.593 "small_cache_size": 128, 00:25:23.593 "task_count": 2048 00:25:23.593 } 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "bdev", 00:25:23.593 "config": [ 00:25:23.593 { 00:25:23.593 "method": "bdev_set_options", 00:25:23.593 "params": { 00:25:23.593 "bdev_auto_examine": true, 00:25:23.593 "bdev_io_cache_size": 256, 00:25:23.593 "bdev_io_pool_size": 65535, 00:25:23.593 "iobuf_large_cache_size": 16, 00:25:23.593 "iobuf_small_cache_size": 128 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "bdev_raid_set_options", 00:25:23.593 "params": { 00:25:23.593 "process_window_size_kb": 1024 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "bdev_iscsi_set_options", 00:25:23.593 "params": { 00:25:23.593 "timeout_sec": 30 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "bdev_nvme_set_options", 00:25:23.593 "params": { 00:25:23.593 "action_on_timeout": "none", 00:25:23.593 "allow_accel_sequence": false, 00:25:23.593 "arbitration_burst": 0, 00:25:23.593 "bdev_retry_count": 3, 00:25:23.593 "ctrlr_loss_timeout_sec": 0, 00:25:23.593 "delay_cmd_submit": true, 00:25:23.593 "dhchap_dhgroups": [ 00:25:23.593 "null", 00:25:23.593 "ffdhe2048", 00:25:23.593 "ffdhe3072", 00:25:23.593 "ffdhe4096", 00:25:23.593 "ffdhe6144", 00:25:23.593 "ffdhe8192" 00:25:23.593 ], 00:25:23.593 "dhchap_digests": [ 00:25:23.593 "sha256", 00:25:23.593 "sha384", 00:25:23.593 "sha512" 00:25:23.593 ], 00:25:23.593 "disable_auto_failback": false, 00:25:23.593 "fast_io_fail_timeout_sec": 0, 00:25:23.593 "generate_uuids": false, 00:25:23.593 "high_priority_weight": 0, 00:25:23.593 "io_path_stat": false, 00:25:23.593 "io_queue_requests": 0, 00:25:23.593 "keep_alive_timeout_ms": 10000, 00:25:23.593 "low_priority_weight": 0, 00:25:23.593 "medium_priority_weight": 0, 00:25:23.593 "nvme_adminq_poll_period_us": 10000, 00:25:23.593 "nvme_error_stat": false, 00:25:23.593 "nvme_ioq_poll_period_us": 0, 00:25:23.593 "rdma_cm_event_timeout_ms": 0, 00:25:23.593 "rdma_max_cq_size": 0, 00:25:23.593 "rdma_srq_size": 0, 00:25:23.593 "reconnect_delay_sec": 0, 00:25:23.593 "timeout_admin_us": 0, 00:25:23.593 "timeout_us": 0, 00:25:23.593 "transport_ack_timeout": 0, 00:25:23.593 "transport_retry_count": 4, 00:25:23.593 "transport_tos": 0 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "bdev_nvme_set_hotplug", 00:25:23.593 "params": { 00:25:23.593 "enable": false, 00:25:23.593 "period_us": 100000 00:25:23.593 } 00:25:23.593 
}, 00:25:23.593 { 00:25:23.593 "method": "bdev_malloc_create", 00:25:23.593 "params": { 00:25:23.593 "block_size": 4096, 00:25:23.593 "name": "malloc0", 00:25:23.593 "num_blocks": 8192, 00:25:23.593 "optimal_io_boundary": 0, 00:25:23.593 "physical_block_size": 4096, 00:25:23.593 "uuid": "e1cc77f4-9028-4e8c-b61b-eb7ad648d3e9" 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "bdev_wait_for_examine" 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "nbd", 00:25:23.593 "config": [] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "scheduler", 00:25:23.593 "config": [ 00:25:23.593 { 00:25:23.593 "method": "framework_set_scheduler", 00:25:23.593 "params": { 00:25:23.593 "name": "static" 00:25:23.593 } 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "subsystem": "nvmf", 00:25:23.593 "config": [ 00:25:23.593 { 00:25:23.593 "method": "nvmf_set_config", 00:25:23.593 "params": { 00:25:23.593 "admin_cmd_passthru": { 00:25:23.593 "identify_ctrlr": false 00:25:23.593 }, 00:25:23.593 "discovery_filter": "match_any" 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "nvmf_set_max_subsystems", 00:25:23.593 "params": { 00:25:23.593 "max_subsystems": 1024 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "nvmf_set_crdt", 00:25:23.593 "params": { 00:25:23.593 "crdt1": 0, 00:25:23.593 "crdt2": 0, 00:25:23.593 "crdt3": 0 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "nvmf_create_transport", 00:25:23.593 "params": { 00:25:23.593 "abort_timeout_sec": 1, 00:25:23.593 "ack_timeout": 0, 00:25:23.593 "buf_cache_size": 4294967295, 00:25:23.593 "c2h_success": false, 00:25:23.593 "dif_insert_or_strip": false, 00:25:23.593 "in_capsule_data_size": 4096, 00:25:23.593 "io_unit_size": 131072, 00:25:23.593 "max_aq_depth": 128, 00:25:23.593 "max_io_qpairs_per_ctrlr": 127, 00:25:23.593 "max_io_size": 131072, 00:25:23.593 "max_queue_depth": 128, 00:25:23.593 "num_shared_buffers": 511, 00:25:23.593 "sock_priority": 0, 00:25:23.593 "trtype": "TCP", 00:25:23.593 "zcopy": false 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "nvmf_create_subsystem", 00:25:23.593 "params": { 00:25:23.593 "allow_any_host": false, 00:25:23.593 "ana_reporting": false, 00:25:23.593 "max_cntlid": 65519, 00:25:23.593 "max_namespaces": 32, 00:25:23.593 "min_cntlid": 1, 00:25:23.593 "model_number": "SPDK bdev Controller", 00:25:23.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.593 "serial_number": "00000000000000000000" 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "nvmf_subsystem_add_host", 00:25:23.593 "params": { 00:25:23.593 "host": "nqn.2016-06.io.spdk:host1", 00:25:23.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.593 "psk": "key0" 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "nvmf_subsystem_add_ns", 00:25:23.593 "params": { 00:25:23.593 "namespace": { 00:25:23.593 "bdev_name": "malloc0", 00:25:23.593 "nguid": "E1CC77F490284E8CB61BEB7AD648D3E9", 00:25:23.593 "no_auto_visible": false, 00:25:23.593 "nsid": 1, 00:25:23.593 "uuid": "e1cc77f4-9028-4e8c-b61b-eb7ad648d3e9" 00:25:23.593 }, 00:25:23.593 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:23.593 } 00:25:23.593 }, 00:25:23.593 { 00:25:23.593 "method": "nvmf_subsystem_add_listener", 00:25:23.593 "params": { 00:25:23.593 "listen_address": { 00:25:23.593 "adrfam": "IPv4", 00:25:23.593 "traddr": "10.0.0.2", 00:25:23.593 "trsvcid": "4420", 00:25:23.593 "trtype": "TCP" 00:25:23.593 }, 
00:25:23.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.593 "secure_channel": true 00:25:23.593 } 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 } 00:25:23.593 ] 00:25:23.593 }' 00:25:23.593 11:12:51 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:23.851 11:12:52 -- target/tls.sh@264 -- # bperfcfg='{ 00:25:23.851 "subsystems": [ 00:25:23.851 { 00:25:23.851 "subsystem": "keyring", 00:25:23.851 "config": [ 00:25:23.851 { 00:25:23.851 "method": "keyring_file_add_key", 00:25:23.851 "params": { 00:25:23.851 "name": "key0", 00:25:23.851 "path": "/tmp/tmp.crRmH0pMpj" 00:25:23.851 } 00:25:23.851 } 00:25:23.851 ] 00:25:23.851 }, 00:25:23.851 { 00:25:23.851 "subsystem": "iobuf", 00:25:23.851 "config": [ 00:25:23.851 { 00:25:23.851 "method": "iobuf_set_options", 00:25:23.851 "params": { 00:25:23.851 "large_bufsize": 135168, 00:25:23.851 "large_pool_count": 1024, 00:25:23.851 "small_bufsize": 8192, 00:25:23.851 "small_pool_count": 8192 00:25:23.851 } 00:25:23.851 } 00:25:23.851 ] 00:25:23.851 }, 00:25:23.851 { 00:25:23.851 "subsystem": "sock", 00:25:23.851 "config": [ 00:25:23.851 { 00:25:23.851 "method": "sock_impl_set_options", 00:25:23.851 "params": { 00:25:23.851 "enable_ktls": false, 00:25:23.851 "enable_placement_id": 0, 00:25:23.851 "enable_quickack": false, 00:25:23.851 "enable_recv_pipe": true, 00:25:23.851 "enable_zerocopy_send_client": false, 00:25:23.851 "enable_zerocopy_send_server": true, 00:25:23.851 "impl_name": "posix", 00:25:23.851 "recv_buf_size": 2097152, 00:25:23.851 "send_buf_size": 2097152, 00:25:23.851 "tls_version": 0, 00:25:23.851 "zerocopy_threshold": 0 00:25:23.851 } 00:25:23.851 }, 00:25:23.851 { 00:25:23.851 "method": "sock_impl_set_options", 00:25:23.851 "params": { 00:25:23.851 "enable_ktls": false, 00:25:23.851 "enable_placement_id": 0, 00:25:23.851 "enable_quickack": false, 00:25:23.851 "enable_recv_pipe": true, 00:25:23.851 "enable_zerocopy_send_client": false, 00:25:23.851 "enable_zerocopy_send_server": true, 00:25:23.851 "impl_name": "ssl", 00:25:23.851 "recv_buf_size": 4096, 00:25:23.851 "send_buf_size": 4096, 00:25:23.851 "tls_version": 0, 00:25:23.852 "zerocopy_threshold": 0 00:25:23.852 } 00:25:23.852 } 00:25:23.852 ] 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "subsystem": "vmd", 00:25:23.852 "config": [] 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "subsystem": "accel", 00:25:23.852 "config": [ 00:25:23.852 { 00:25:23.852 "method": "accel_set_options", 00:25:23.852 "params": { 00:25:23.852 "buf_count": 2048, 00:25:23.852 "large_cache_size": 16, 00:25:23.852 "sequence_count": 2048, 00:25:23.852 "small_cache_size": 128, 00:25:23.852 "task_count": 2048 00:25:23.852 } 00:25:23.852 } 00:25:23.852 ] 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "subsystem": "bdev", 00:25:23.852 "config": [ 00:25:23.852 { 00:25:23.852 "method": "bdev_set_options", 00:25:23.852 "params": { 00:25:23.852 "bdev_auto_examine": true, 00:25:23.852 "bdev_io_cache_size": 256, 00:25:23.852 "bdev_io_pool_size": 65535, 00:25:23.852 "iobuf_large_cache_size": 16, 00:25:23.852 "iobuf_small_cache_size": 128 00:25:23.852 } 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "method": "bdev_raid_set_options", 00:25:23.852 "params": { 00:25:23.852 "process_window_size_kb": 1024 00:25:23.852 } 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "method": "bdev_iscsi_set_options", 00:25:23.852 "params": { 00:25:23.852 "timeout_sec": 30 00:25:23.852 } 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "method": "bdev_nvme_set_options", 00:25:23.852 "params": { 
00:25:23.852 "action_on_timeout": "none", 00:25:23.852 "allow_accel_sequence": false, 00:25:23.852 "arbitration_burst": 0, 00:25:23.852 "bdev_retry_count": 3, 00:25:23.852 "ctrlr_loss_timeout_sec": 0, 00:25:23.852 "delay_cmd_submit": true, 00:25:23.852 "dhchap_dhgroups": [ 00:25:23.852 "null", 00:25:23.852 "ffdhe2048", 00:25:23.852 "ffdhe3072", 00:25:23.852 "ffdhe4096", 00:25:23.852 "ffdhe6144", 00:25:23.852 "ffdhe8192" 00:25:23.852 ], 00:25:23.852 "dhchap_digests": [ 00:25:23.852 "sha256", 00:25:23.852 "sha384", 00:25:23.852 "sha512" 00:25:23.852 ], 00:25:23.852 "disable_auto_failback": false, 00:25:23.852 "fast_io_fail_timeout_sec": 0, 00:25:23.852 "generate_uuids": false, 00:25:23.852 "high_priority_weight": 0, 00:25:23.852 "io_path_stat": false, 00:25:23.852 "io_queue_requests": 512, 00:25:23.852 "keep_alive_timeout_ms": 10000, 00:25:23.852 "low_priority_weight": 0, 00:25:23.852 "medium_priority_weight": 0, 00:25:23.852 "nvme_adminq_poll_period_us": 10000, 00:25:23.852 "nvme_error_stat": false, 00:25:23.852 "nvme_ioq_poll_period_us": 0, 00:25:23.852 "rdma_cm_event_timeout_ms": 0, 00:25:23.852 "rdma_max_cq_size": 0, 00:25:23.852 "rdma_srq_size": 0, 00:25:23.852 "reconnect_delay_sec": 0, 00:25:23.852 "timeout_admin_us": 0, 00:25:23.852 "timeout_us": 0, 00:25:23.852 "transport_ack_timeout": 0, 00:25:23.852 "transport_retry_count": 4, 00:25:23.852 "transport_tos": 0 00:25:23.852 } 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "method": "bdev_nvme_attach_controller", 00:25:23.852 "params": { 00:25:23.852 "adrfam": "IPv4", 00:25:23.852 "ctrlr_loss_timeout_sec": 0, 00:25:23.852 "ddgst": false, 00:25:23.852 "fast_io_fail_timeout_sec": 0, 00:25:23.852 "hdgst": false, 00:25:23.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:23.852 "name": "nvme0", 00:25:23.852 "prchk_guard": false, 00:25:23.852 "prchk_reftag": false, 00:25:23.852 "psk": "key0", 00:25:23.852 "reconnect_delay_sec": 0, 00:25:23.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.852 "traddr": "10.0.0.2", 00:25:23.852 "trsvcid": "4420", 00:25:23.852 "trtype": "TCP" 00:25:23.852 } 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "method": "bdev_nvme_set_hotplug", 00:25:23.852 "params": { 00:25:23.852 "enable": false, 00:25:23.852 "period_us": 100000 00:25:23.852 } 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "method": "bdev_enable_histogram", 00:25:23.852 "params": { 00:25:23.852 "enable": true, 00:25:23.852 "name": "nvme0n1" 00:25:23.852 } 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "method": "bdev_wait_for_examine" 00:25:23.852 } 00:25:23.852 ] 00:25:23.852 }, 00:25:23.852 { 00:25:23.852 "subsystem": "nbd", 00:25:23.852 "config": [] 00:25:23.852 } 00:25:23.852 ] 00:25:23.852 }' 00:25:23.852 11:12:52 -- target/tls.sh@266 -- # killprocess 95053 00:25:23.852 11:12:52 -- common/autotest_common.sh@936 -- # '[' -z 95053 ']' 00:25:23.852 11:12:52 -- common/autotest_common.sh@940 -- # kill -0 95053 00:25:23.852 11:12:52 -- common/autotest_common.sh@941 -- # uname 00:25:23.852 11:12:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:23.852 11:12:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95053 00:25:23.852 11:12:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:23.852 killing process with pid 95053 00:25:23.852 Received shutdown signal, test time was about 1.000000 seconds 00:25:23.852 00:25:23.852 Latency(us) 00:25:23.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.852 
=================================================================================================================== 00:25:23.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.852 11:12:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:23.852 11:12:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95053' 00:25:23.852 11:12:52 -- common/autotest_common.sh@955 -- # kill 95053 00:25:23.852 11:12:52 -- common/autotest_common.sh@960 -- # wait 95053 00:25:24.110 11:12:52 -- target/tls.sh@267 -- # killprocess 95003 00:25:24.110 11:12:52 -- common/autotest_common.sh@936 -- # '[' -z 95003 ']' 00:25:24.110 11:12:52 -- common/autotest_common.sh@940 -- # kill -0 95003 00:25:24.110 11:12:52 -- common/autotest_common.sh@941 -- # uname 00:25:24.110 11:12:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.110 11:12:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95003 00:25:24.110 11:12:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:24.110 killing process with pid 95003 00:25:24.110 11:12:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:24.110 11:12:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95003' 00:25:24.110 11:12:52 -- common/autotest_common.sh@955 -- # kill 95003 00:25:24.110 11:12:52 -- common/autotest_common.sh@960 -- # wait 95003 00:25:24.368 11:12:52 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:24.368 11:12:52 -- target/tls.sh@269 -- # echo '{ 00:25:24.368 "subsystems": [ 00:25:24.369 { 00:25:24.369 "subsystem": "keyring", 00:25:24.369 "config": [ 00:25:24.369 { 00:25:24.369 "method": "keyring_file_add_key", 00:25:24.369 "params": { 00:25:24.369 "name": "key0", 00:25:24.369 "path": "/tmp/tmp.crRmH0pMpj" 00:25:24.369 } 00:25:24.369 } 00:25:24.369 ] 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "subsystem": "iobuf", 00:25:24.369 "config": [ 00:25:24.369 { 00:25:24.369 "method": "iobuf_set_options", 00:25:24.369 "params": { 00:25:24.369 "large_bufsize": 135168, 00:25:24.369 "large_pool_count": 1024, 00:25:24.369 "small_bufsize": 8192, 00:25:24.369 "small_pool_count": 8192 00:25:24.369 } 00:25:24.369 } 00:25:24.369 ] 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "subsystem": "sock", 00:25:24.369 "config": [ 00:25:24.369 { 00:25:24.369 "method": "sock_impl_set_options", 00:25:24.369 "params": { 00:25:24.369 "enable_ktls": false, 00:25:24.369 "enable_placement_id": 0, 00:25:24.369 "enable_quickack": false, 00:25:24.369 "enable_recv_pipe": true, 00:25:24.369 "enable_zerocopy_send_client": false, 00:25:24.369 "enable_zerocopy_send_server": true, 00:25:24.369 "impl_name": "posix", 00:25:24.369 "recv_buf_size": 2097152, 00:25:24.369 "send_buf_size": 2097152, 00:25:24.369 "tls_version": 0, 00:25:24.369 "zerocopy_threshold": 0 00:25:24.369 } 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "method": "sock_impl_set_options", 00:25:24.369 "params": { 00:25:24.369 "enable_ktls": false, 00:25:24.369 "enable_placement_id": 0, 00:25:24.369 "enable_quickack": false, 00:25:24.369 "enable_recv_pipe": true, 00:25:24.369 "enable_zerocopy_send_client": false, 00:25:24.369 "enable_zerocopy_send_server": true, 00:25:24.369 "impl_name": "ssl", 00:25:24.369 "recv_buf_size": 4096, 00:25:24.369 "send_buf_size": 4096, 00:25:24.369 "tls_version": 0, 00:25:24.369 "zerocopy_threshold": 0 00:25:24.369 } 00:25:24.369 } 00:25:24.369 ] 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "subsystem": "vmd", 00:25:24.369 "config": [] 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 
"subsystem": "accel", 00:25:24.369 "config": [ 00:25:24.369 { 00:25:24.369 "method": "accel_set_options", 00:25:24.369 "params": { 00:25:24.369 "buf_count": 2048, 00:25:24.369 "large_cache_size": 16, 00:25:24.369 "sequence_count": 2048, 00:25:24.369 "small_cache_size": 128, 00:25:24.369 "task_count": 2048 00:25:24.369 } 00:25:24.369 } 00:25:24.369 ] 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "subsystem": "bdev", 00:25:24.369 "config": [ 00:25:24.369 { 00:25:24.369 "method": "bdev_set_options", 00:25:24.369 "params": { 00:25:24.369 "bdev_auto_examine": true, 00:25:24.369 "bdev_io_cache_size": 256, 00:25:24.369 "bdev_io_pool_size": 65535, 00:25:24.369 "iobuf_large_cache_size": 16, 00:25:24.369 "iobuf_small_cache_size": 128 00:25:24.369 } 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "method": "bdev_raid_set_options", 00:25:24.369 "params": { 00:25:24.369 "process_window_size_kb": 1024 00:25:24.369 } 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "method": "bdev_iscsi_set_options", 00:25:24.369 "params": { 00:25:24.369 "timeout_sec": 30 00:25:24.369 } 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "method": "bdev_nvme_set_options", 00:25:24.369 "params": { 00:25:24.369 "action_on_timeout": "none", 00:25:24.369 "allow_accel_sequence": false, 00:25:24.369 "arbitration_burst": 0, 00:25:24.369 "bdev_retry_count": 3, 00:25:24.369 "ctrlr_loss_timeout_sec": 0, 00:25:24.369 "delay_cmd_submit": true, 00:25:24.369 "dhchap_dhgroups": [ 00:25:24.369 "null", 00:25:24.369 "ffdhe2048", 00:25:24.369 "ffdhe3072", 00:25:24.369 "ffdhe4096", 00:25:24.369 "ffdhe6144", 00:25:24.369 "ffdhe8192" 00:25:24.369 ], 00:25:24.369 "dhchap_digests": [ 00:25:24.369 "sha256", 00:25:24.369 "sha384", 00:25:24.369 "sha512" 00:25:24.369 ], 00:25:24.369 "disable_auto_failback": false, 00:25:24.369 "fast_io_fail_timeout_sec": 0, 00:25:24.369 "generate_uuids": false, 00:25:24.369 "high_priority_weight": 0, 00:25:24.369 "io_path_stat": false, 00:25:24.369 "io_queue_requests": 0, 00:25:24.369 "keep_alive_timeout_ms": 10000, 00:25:24.369 "low_priority_weight": 0, 00:25:24.369 "medium_priority_weight": 0, 00:25:24.369 "nvme_adminq_poll_period_us": 10000, 00:25:24.369 "nvme_error_stat": false, 00:25:24.369 "nvme_ioq_poll_period_us": 0, 00:25:24.369 "rdma_cm_event_timeout_ms": 0, 00:25:24.369 "rdma_max_cq_size": 0, 00:25:24.369 "rdma_srq_size": 0, 00:25:24.369 "reconnect_delay_sec": 0, 00:25:24.369 "timeout_admin_us": 0, 00:25:24.369 "timeout_us": 0, 00:25:24.369 "transport_ack_timeout": 0, 00:25:24.369 "transport_retry_count": 4, 00:25:24.369 "transport_tos": 0 00:25:24.369 } 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "method": "bdev_nvme_set_hotplug", 00:25:24.369 "params": { 00:25:24.369 "enable": false, 00:25:24.369 "period_us": 100000 00:25:24.369 } 00:25:24.369 }, 00:25:24.369 { 00:25:24.369 "method": "bdev_malloc_create", 00:25:24.369 "params": { 00:25:24.369 "block_size": 4096, 00:25:24.369 "name": "malloc0", 00:25:24.369 "num_blocks": 8192, 00:25:24.369 "optimal_io_boundary": 0, 00:25:24.369 "physical_block_size": 4096, 00:25:24.370 "uuid": "e1cc77f4-9028-4e8c-b61b-eb7ad648d3e9" 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "bdev_wait_for_examine" 00:25:24.370 } 00:25:24.370 ] 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "subsystem": "nbd", 00:25:24.370 "config": [] 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "subsystem": "scheduler", 00:25:24.370 "config": [ 00:25:24.370 { 00:25:24.370 "method": "framework_set_scheduler", 00:25:24.370 "params": { 00:25:24.370 "name": "static" 00:25:24.370 } 00:25:24.370 } 
00:25:24.370 ] 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "subsystem": "nvmf", 00:25:24.370 "config": [ 00:25:24.370 { 00:25:24.370 "method": "nvmf_set_config", 00:25:24.370 "params": { 00:25:24.370 "admin_cmd_passthru": { 00:25:24.370 "identify_ctrlr": false 00:25:24.370 }, 00:25:24.370 "discovery_filter": "match_any" 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "nvmf_set_max_subsystems", 00:25:24.370 "params": { 00:25:24.370 "max_subsystems": 1024 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "nvmf_set_crdt", 00:25:24.370 "params": { 00:25:24.370 "crdt1": 0, 00:25:24.370 "crdt2": 0, 00:25:24.370 "crdt3": 0 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "nvmf_create_transport", 00:25:24.370 "params": { 00:25:24.370 "abort_timeout_sec": 1, 00:25:24.370 "ack_timeout": 0, 00:25:24.370 "buf_cache_size": 4294967295, 00:25:24.370 "c2h_success": false, 00:25:24.370 "dif_insert_or_strip": false, 00:25:24.370 "in_capsule_data_size": 4096, 00:25:24.370 "io_unit_size": 131072, 00:25:24.370 "max_aq_depth": 128, 00:25:24.370 "max_io_qpairs_per_ctrlr": 127, 00:25:24.370 "max_io_size": 131072, 00:25:24.370 "max_queue_depth": 128, 00:25:24.370 "num_shared_buffers": 511, 00:25:24.370 "sock_priority": 0, 00:25:24.370 "trtype": "TCP", 00:25:24.370 "zcopy": false 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "nvmf_create_subsystem", 00:25:24.370 "params": { 00:25:24.370 "allow_any_host": false, 00:25:24.370 "ana_reporting": false, 00:25:24.370 "max_cntlid": 65519, 00:25:24.370 "max_namespaces": 32, 00:25:24.370 "min_cntlid": 1, 00:25:24.370 "model_number": "SPDK bdev Controller", 00:25:24.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.370 "serial_number": "00000000000000000000" 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "nvmf_subsystem_add_host", 00:25:24.370 "params": { 00:25:24.370 "host": "nqn.2016-06.io.spdk:host1", 00:25:24.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.370 "psk": "key0" 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "nvmf_subsystem_add_ns", 00:25:24.370 "params": { 00:25:24.370 "namespace": { 00:25:24.370 "bdev_name": "malloc0", 00:25:24.370 "nguid": "E1CC77F490284E8CB61BEB7AD648D3E9", 00:25:24.370 "no_auto_visible": false, 00:25:24.370 "nsid": 1, 00:25:24.370 "uuid": "e1cc77f4-9028-4e8c-b61b-eb7ad648d3e9" 00:25:24.370 }, 00:25:24.370 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:24.370 } 00:25:24.370 }, 00:25:24.370 { 00:25:24.370 "method": "nvmf_subsystem_add_listener", 00:25:24.370 "params": { 00:25:24.370 "listen_address": { 00:25:24.370 "adrfam": "IPv4", 00:25:24.370 "traddr": "10.0.0.2", 00:25:24.370 "trsvcid": "4420", 00:25:24.370 "trtype": "TCP" 00:25:24.370 }, 00:25:24.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.370 "secure_channel": true 00:25:24.370 } 00:25:24.370 } 00:25:24.370 ] 00:25:24.370 } 00:25:24.370 ] 00:25:24.370 }' 00:25:24.370 11:12:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:24.370 11:12:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:24.370 11:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:24.370 11:12:52 -- nvmf/common.sh@470 -- # nvmfpid=95144 00:25:24.370 11:12:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:24.370 11:12:52 -- nvmf/common.sh@471 -- # waitforlisten 95144 00:25:24.370 11:12:52 -- common/autotest_common.sh@817 -- # '[' -z 95144 ']' 00:25:24.370 11:12:52 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.370 11:12:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:24.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.370 11:12:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.370 11:12:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:24.370 11:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:24.370 [2024-04-18 11:12:52.837621] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:24.370 [2024-04-18 11:12:52.837760] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.370 [2024-04-18 11:12:52.975163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.629 [2024-04-18 11:12:53.076531] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.629 [2024-04-18 11:12:53.076592] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.629 [2024-04-18 11:12:53.076605] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.629 [2024-04-18 11:12:53.076614] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.629 [2024-04-18 11:12:53.076622] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.629 [2024-04-18 11:12:53.076740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.886 [2024-04-18 11:12:53.305743] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.886 [2024-04-18 11:12:53.337734] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.886 [2024-04-18 11:12:53.338022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.451 11:12:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:25.451 11:12:53 -- common/autotest_common.sh@850 -- # return 0 00:25:25.451 11:12:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:25.451 11:12:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:25.451 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:25.451 11:12:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:25.451 11:12:53 -- target/tls.sh@272 -- # bdevperf_pid=95188 00:25:25.451 11:12:53 -- target/tls.sh@273 -- # waitforlisten 95188 /var/tmp/bdevperf.sock 00:25:25.451 11:12:53 -- common/autotest_common.sh@817 -- # '[' -z 95188 ']' 00:25:25.451 11:12:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.451 11:12:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.451 11:12:53 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:25.451 11:12:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
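[Editor's note] Stripped of timing and xtrace noise, the TLS piece of the saved configs above reduces to registering the PSK file as a keyring key and referencing that key on both ends of the connection. A sketch using only names that appear in this log (PSK file /tmp/tmp.crRmH0pMpj, key name key0, subsystem cnode1, host1); the target-side grant is quoted as it appears in the saved JSON config, since its rpc.py flag spelling is not visible in this excerpt:

    # Initiator (bdevperf) side - both commands appear verbatim in the log:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.crRmH0pMpj
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Target side: the saved config grants that host access with the same key:
    #   { "method": "nvmf_subsystem_add_host",
    #     "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
    #                 "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } }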
00:25:25.451 11:12:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.451 11:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:25.451 11:12:53 -- target/tls.sh@270 -- # echo '{ 00:25:25.451 "subsystems": [ 00:25:25.451 { 00:25:25.451 "subsystem": "keyring", 00:25:25.451 "config": [ 00:25:25.451 { 00:25:25.451 "method": "keyring_file_add_key", 00:25:25.451 "params": { 00:25:25.451 "name": "key0", 00:25:25.451 "path": "/tmp/tmp.crRmH0pMpj" 00:25:25.451 } 00:25:25.451 } 00:25:25.451 ] 00:25:25.451 }, 00:25:25.451 { 00:25:25.451 "subsystem": "iobuf", 00:25:25.451 "config": [ 00:25:25.451 { 00:25:25.451 "method": "iobuf_set_options", 00:25:25.451 "params": { 00:25:25.451 "large_bufsize": 135168, 00:25:25.451 "large_pool_count": 1024, 00:25:25.451 "small_bufsize": 8192, 00:25:25.451 "small_pool_count": 8192 00:25:25.451 } 00:25:25.451 } 00:25:25.451 ] 00:25:25.451 }, 00:25:25.451 { 00:25:25.451 "subsystem": "sock", 00:25:25.451 "config": [ 00:25:25.451 { 00:25:25.451 "method": "sock_impl_set_options", 00:25:25.451 "params": { 00:25:25.451 "enable_ktls": false, 00:25:25.451 "enable_placement_id": 0, 00:25:25.451 "enable_quickack": false, 00:25:25.451 "enable_recv_pipe": true, 00:25:25.451 "enable_zerocopy_send_client": false, 00:25:25.451 "enable_zerocopy_send_server": true, 00:25:25.451 "impl_name": "posix", 00:25:25.451 "recv_buf_size": 2097152, 00:25:25.451 "send_buf_size": 2097152, 00:25:25.451 "tls_version": 0, 00:25:25.451 "zerocopy_threshold": 0 00:25:25.451 } 00:25:25.451 }, 00:25:25.451 { 00:25:25.451 "method": "sock_impl_set_options", 00:25:25.451 "params": { 00:25:25.451 "enable_ktls": false, 00:25:25.451 "enable_placement_id": 0, 00:25:25.451 "enable_quickack": false, 00:25:25.451 "enable_recv_pipe": true, 00:25:25.451 "enable_zerocopy_send_client": false, 00:25:25.451 "enable_zerocopy_send_server": true, 00:25:25.451 "impl_name": "ssl", 00:25:25.451 "recv_buf_size": 4096, 00:25:25.451 "send_buf_size": 4096, 00:25:25.451 "tls_version": 0, 00:25:25.451 "zerocopy_threshold": 0 00:25:25.451 } 00:25:25.451 } 00:25:25.451 ] 00:25:25.451 }, 00:25:25.451 { 00:25:25.452 "subsystem": "vmd", 00:25:25.452 "config": [] 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "subsystem": "accel", 00:25:25.452 "config": [ 00:25:25.452 { 00:25:25.452 "method": "accel_set_options", 00:25:25.452 "params": { 00:25:25.452 "buf_count": 2048, 00:25:25.452 "large_cache_size": 16, 00:25:25.452 "sequence_count": 2048, 00:25:25.452 "small_cache_size": 128, 00:25:25.452 "task_count": 2048 00:25:25.452 } 00:25:25.452 } 00:25:25.452 ] 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "subsystem": "bdev", 00:25:25.452 "config": [ 00:25:25.452 { 00:25:25.452 "method": "bdev_set_options", 00:25:25.452 "params": { 00:25:25.452 "bdev_auto_examine": true, 00:25:25.452 "bdev_io_cache_size": 256, 00:25:25.452 "bdev_io_pool_size": 65535, 00:25:25.452 "iobuf_large_cache_size": 16, 00:25:25.452 "iobuf_small_cache_size": 128 00:25:25.452 } 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "method": "bdev_raid_set_options", 00:25:25.452 "params": { 00:25:25.452 "process_window_size_kb": 1024 00:25:25.452 } 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "method": "bdev_iscsi_set_options", 00:25:25.452 "params": { 00:25:25.452 "timeout_sec": 30 00:25:25.452 } 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "method": "bdev_nvme_set_options", 00:25:25.452 "params": { 00:25:25.452 "action_on_timeout": "none", 00:25:25.452 "allow_accel_sequence": false, 00:25:25.452 "arbitration_burst": 0, 00:25:25.452 "bdev_retry_count": 3, 00:25:25.452 
"ctrlr_loss_timeout_sec": 0, 00:25:25.452 "delay_cmd_submit": true, 00:25:25.452 "dhchap_dhgroups": [ 00:25:25.452 "null", 00:25:25.452 "ffdhe2048", 00:25:25.452 "ffdhe3072", 00:25:25.452 "ffdhe4096", 00:25:25.452 "ffdhe6144", 00:25:25.452 "ffdhe8192" 00:25:25.452 ], 00:25:25.452 "dhchap_digests": [ 00:25:25.452 "sha256", 00:25:25.452 "sha384", 00:25:25.452 "sha512" 00:25:25.452 ], 00:25:25.452 "disable_auto_failback": false, 00:25:25.452 "fast_io_fail_timeout_sec": 0, 00:25:25.452 "generate_uuids": false, 00:25:25.452 "high_priority_weight": 0, 00:25:25.452 "io_path_stat": false, 00:25:25.452 "io_queue_requests": 512, 00:25:25.452 "keep_alive_timeout_ms": 10000, 00:25:25.452 "low_priority_weight": 0, 00:25:25.452 "medium_priority_weight": 0, 00:25:25.452 "nvme_adminq_poll_period_us": 10000, 00:25:25.452 "nvme_error_stat": false, 00:25:25.452 "nvme_ioq_poll_period_us": 0, 00:25:25.452 "rdma_cm_event_timeout_ms": 0, 00:25:25.452 "rdma_max_cq_size": 0, 00:25:25.452 "rdma_srq_size": 0, 00:25:25.452 "reconnect_delay_sec": 0, 00:25:25.452 "timeout_admin_us": 0, 00:25:25.452 "timeout_us": 0, 00:25:25.452 "transport_ack_timeout": 0, 00:25:25.452 "transport_retry_count": 4, 00:25:25.452 "transport_tos": 0 00:25:25.452 } 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "method": "bdev_nvme_attach_controller", 00:25:25.452 "params": { 00:25:25.452 "adrfam": "IPv4", 00:25:25.452 "ctrlr_loss_timeout_sec": 0, 00:25:25.452 "ddgst": false, 00:25:25.452 "fast_io_fail_timeout_sec": 0, 00:25:25.452 "hdgst": false, 00:25:25.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.452 "name": "nvme0", 00:25:25.452 "prchk_guard": false, 00:25:25.452 "prchk_reftag": false, 00:25:25.452 "psk": "key0", 00:25:25.452 "reconnect_delay_sec": 0, 00:25:25.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.452 "traddr": "10.0.0.2", 00:25:25.452 "trsvcid": "4420", 00:25:25.452 "trtype": "TCP" 00:25:25.452 } 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "method": "bdev_nvme_set_hotplug", 00:25:25.452 "params": { 00:25:25.452 "enable": false, 00:25:25.452 "period_us": 100000 00:25:25.452 } 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "method": "bdev_enable_histogram", 00:25:25.452 "params": { 00:25:25.452 "enable": true, 00:25:25.452 "name": "nvme0n1" 00:25:25.452 } 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "method": "bdev_wait_for_examine" 00:25:25.452 } 00:25:25.452 ] 00:25:25.452 }, 00:25:25.452 { 00:25:25.452 "subsystem": "nbd", 00:25:25.452 "config": [] 00:25:25.452 } 00:25:25.452 ] 00:25:25.452 }' 00:25:25.452 [2024-04-18 11:12:53.903820] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:25.452 [2024-04-18 11:12:53.903912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95188 ] 00:25:25.452 [2024-04-18 11:12:54.036978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.710 [2024-04-18 11:12:54.141680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.710 [2024-04-18 11:12:54.304437] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:26.276 11:12:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.276 11:12:54 -- common/autotest_common.sh@850 -- # return 0 00:25:26.276 11:12:54 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.276 11:12:54 -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:26.842 11:12:55 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.842 11:12:55 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:26.842 Running I/O for 1 seconds... 00:25:27.777 00:25:27.777 Latency(us) 00:25:27.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.777 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.777 Verification LBA range: start 0x0 length 0x2000 00:25:27.777 nvme0n1 : 1.03 3706.26 14.48 0.00 0.00 34092.21 7149.38 20137.43 00:25:27.777 =================================================================================================================== 00:25:27.777 Total : 3706.26 14.48 0.00 0.00 34092.21 7149.38 20137.43 00:25:27.777 0 00:25:27.777 11:12:56 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:27.777 11:12:56 -- target/tls.sh@279 -- # cleanup 00:25:27.777 11:12:56 -- target/tls.sh@15 -- # process_shm --id 0 00:25:27.777 11:12:56 -- common/autotest_common.sh@794 -- # type=--id 00:25:27.777 11:12:56 -- common/autotest_common.sh@795 -- # id=0 00:25:27.777 11:12:56 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:25:27.777 11:12:56 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:27.777 11:12:56 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:25:27.777 11:12:56 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:25:27.777 11:12:56 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:25:27.777 11:12:56 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:27.777 nvmf_trace.0 00:25:28.035 11:12:56 -- common/autotest_common.sh@809 -- # return 0 00:25:28.035 11:12:56 -- target/tls.sh@16 -- # killprocess 95188 00:25:28.035 11:12:56 -- common/autotest_common.sh@936 -- # '[' -z 95188 ']' 00:25:28.035 11:12:56 -- common/autotest_common.sh@940 -- # kill -0 95188 00:25:28.035 11:12:56 -- common/autotest_common.sh@941 -- # uname 00:25:28.035 11:12:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:28.035 11:12:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95188 00:25:28.035 11:12:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:28.035 killing process with pid 95188 00:25:28.035 11:12:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:28.035 11:12:56 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 95188' 00:25:28.035 11:12:56 -- common/autotest_common.sh@955 -- # kill 95188 00:25:28.036 Received shutdown signal, test time was about 1.000000 seconds 00:25:28.036 00:25:28.036 Latency(us) 00:25:28.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.036 =================================================================================================================== 00:25:28.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.036 11:12:56 -- common/autotest_common.sh@960 -- # wait 95188 00:25:28.036 11:12:56 -- target/tls.sh@17 -- # nvmftestfini 00:25:28.036 11:12:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:28.036 11:12:56 -- nvmf/common.sh@117 -- # sync 00:25:28.295 11:12:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.295 11:12:56 -- nvmf/common.sh@120 -- # set +e 00:25:28.295 11:12:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.295 11:12:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.295 rmmod nvme_tcp 00:25:28.295 rmmod nvme_fabrics 00:25:28.295 rmmod nvme_keyring 00:25:28.295 11:12:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.295 11:12:56 -- nvmf/common.sh@124 -- # set -e 00:25:28.295 11:12:56 -- nvmf/common.sh@125 -- # return 0 00:25:28.295 11:12:56 -- nvmf/common.sh@478 -- # '[' -n 95144 ']' 00:25:28.295 11:12:56 -- nvmf/common.sh@479 -- # killprocess 95144 00:25:28.295 11:12:56 -- common/autotest_common.sh@936 -- # '[' -z 95144 ']' 00:25:28.295 11:12:56 -- common/autotest_common.sh@940 -- # kill -0 95144 00:25:28.295 11:12:56 -- common/autotest_common.sh@941 -- # uname 00:25:28.295 11:12:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:28.295 11:12:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95144 00:25:28.295 11:12:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:28.295 killing process with pid 95144 00:25:28.295 11:12:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:28.295 11:12:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95144' 00:25:28.295 11:12:56 -- common/autotest_common.sh@955 -- # kill 95144 00:25:28.295 11:12:56 -- common/autotest_common.sh@960 -- # wait 95144 00:25:28.553 11:12:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:28.553 11:12:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:28.553 11:12:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:28.553 11:12:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.553 11:12:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.553 11:12:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.553 11:12:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.553 11:12:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.553 11:12:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:28.553 11:12:57 -- target/tls.sh@18 -- # rm -f /tmp/tmp.C023K450lU /tmp/tmp.fzyX5LWcRd /tmp/tmp.crRmH0pMpj 00:25:28.553 ************************************ 00:25:28.553 END TEST nvmf_tls 00:25:28.553 ************************************ 00:25:28.553 00:25:28.553 real 1m26.896s 00:25:28.553 user 2m18.666s 00:25:28.553 sys 0m28.111s 00:25:28.553 11:12:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:28.553 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:28.553 11:12:57 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:28.553 11:12:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:28.553 11:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.553 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:28.553 ************************************ 00:25:28.553 START TEST nvmf_fips 00:25:28.553 ************************************ 00:25:28.553 11:12:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:28.813 * Looking for test storage... 00:25:28.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:25:28.813 11:12:57 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.813 11:12:57 -- nvmf/common.sh@7 -- # uname -s 00:25:28.813 11:12:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.813 11:12:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.813 11:12:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.813 11:12:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.813 11:12:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.813 11:12:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.813 11:12:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.813 11:12:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.813 11:12:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.813 11:12:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.813 11:12:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:25:28.813 11:12:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:25:28.813 11:12:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.813 11:12:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.813 11:12:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.813 11:12:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.813 11:12:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.813 11:12:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.813 11:12:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.813 11:12:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.813 11:12:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.813 11:12:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.813 11:12:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.813 11:12:57 -- paths/export.sh@5 -- # export PATH 00:25:28.813 11:12:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.813 11:12:57 -- nvmf/common.sh@47 -- # : 0 00:25:28.813 11:12:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.813 11:12:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.813 11:12:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.813 11:12:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.813 11:12:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.813 11:12:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.813 11:12:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.813 11:12:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.813 11:12:57 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.813 11:12:57 -- fips/fips.sh@89 -- # check_openssl_version 00:25:28.813 11:12:57 -- fips/fips.sh@83 -- # local target=3.0.0 00:25:28.813 11:12:57 -- fips/fips.sh@85 -- # openssl version 00:25:28.813 11:12:57 -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:28.813 11:12:57 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:28.813 11:12:57 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:28.813 11:12:57 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:28.813 11:12:57 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:28.813 11:12:57 -- scripts/common.sh@333 -- # IFS=.-: 00:25:28.813 11:12:57 -- scripts/common.sh@333 -- # read -ra ver1 00:25:28.813 11:12:57 -- scripts/common.sh@334 -- # IFS=.-: 00:25:28.813 11:12:57 -- scripts/common.sh@334 -- # read -ra ver2 00:25:28.813 11:12:57 -- scripts/common.sh@335 -- # local 'op=>=' 00:25:28.813 11:12:57 -- scripts/common.sh@337 -- # ver1_l=3 00:25:28.813 11:12:57 -- scripts/common.sh@338 -- # ver2_l=3 00:25:28.813 11:12:57 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:28.813 11:12:57 -- 
scripts/common.sh@341 -- # case "$op" in 00:25:28.813 11:12:57 -- scripts/common.sh@345 -- # : 1 00:25:28.813 11:12:57 -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:28.813 11:12:57 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.813 11:12:57 -- scripts/common.sh@362 -- # decimal 3 00:25:28.813 11:12:57 -- scripts/common.sh@350 -- # local d=3 00:25:28.813 11:12:57 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:28.813 11:12:57 -- scripts/common.sh@352 -- # echo 3 00:25:28.813 11:12:57 -- scripts/common.sh@362 -- # ver1[v]=3 00:25:28.813 11:12:57 -- scripts/common.sh@363 -- # decimal 3 00:25:28.813 11:12:57 -- scripts/common.sh@350 -- # local d=3 00:25:28.813 11:12:57 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:28.813 11:12:57 -- scripts/common.sh@352 -- # echo 3 00:25:28.813 11:12:57 -- scripts/common.sh@363 -- # ver2[v]=3 00:25:28.813 11:12:57 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:28.813 11:12:57 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:28.814 11:12:57 -- scripts/common.sh@361 -- # (( v++ )) 00:25:28.814 11:12:57 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.814 11:12:57 -- scripts/common.sh@362 -- # decimal 0 00:25:28.814 11:12:57 -- scripts/common.sh@350 -- # local d=0 00:25:28.814 11:12:57 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:28.814 11:12:57 -- scripts/common.sh@352 -- # echo 0 00:25:28.814 11:12:57 -- scripts/common.sh@362 -- # ver1[v]=0 00:25:28.814 11:12:57 -- scripts/common.sh@363 -- # decimal 0 00:25:28.814 11:12:57 -- scripts/common.sh@350 -- # local d=0 00:25:28.814 11:12:57 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:28.814 11:12:57 -- scripts/common.sh@352 -- # echo 0 00:25:28.814 11:12:57 -- scripts/common.sh@363 -- # ver2[v]=0 00:25:28.814 11:12:57 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:28.814 11:12:57 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:28.814 11:12:57 -- scripts/common.sh@361 -- # (( v++ )) 00:25:28.814 11:12:57 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:28.814 11:12:57 -- scripts/common.sh@362 -- # decimal 9 00:25:28.814 11:12:57 -- scripts/common.sh@350 -- # local d=9 00:25:28.814 11:12:57 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:28.814 11:12:57 -- scripts/common.sh@352 -- # echo 9 00:25:28.814 11:12:57 -- scripts/common.sh@362 -- # ver1[v]=9 00:25:28.814 11:12:57 -- scripts/common.sh@363 -- # decimal 0 00:25:28.814 11:12:57 -- scripts/common.sh@350 -- # local d=0 00:25:28.814 11:12:57 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:28.814 11:12:57 -- scripts/common.sh@352 -- # echo 0 00:25:28.814 11:12:57 -- scripts/common.sh@363 -- # ver2[v]=0 00:25:28.814 11:12:57 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:28.814 11:12:57 -- scripts/common.sh@364 -- # return 0 00:25:28.814 11:12:57 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:28.814 11:12:57 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:28.814 11:12:57 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:28.814 11:12:57 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:28.814 11:12:57 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:28.814 11:12:57 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:28.814 11:12:57 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:28.814 11:12:57 -- fips/fips.sh@113 -- # build_openssl_config 00:25:28.814 11:12:57 -- fips/fips.sh@37 -- # cat 00:25:28.814 11:12:57 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:25:28.814 11:12:57 -- fips/fips.sh@58 -- # cat - 00:25:28.814 11:12:57 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:28.814 11:12:57 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:28.814 11:12:57 -- fips/fips.sh@116 -- # mapfile -t providers 00:25:28.814 11:12:57 -- fips/fips.sh@116 -- # openssl list -providers 00:25:28.814 11:12:57 -- fips/fips.sh@116 -- # grep name 00:25:28.814 11:12:57 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:28.814 11:12:57 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:28.814 11:12:57 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:28.814 11:12:57 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:28.814 11:12:57 -- common/autotest_common.sh@638 -- # local es=0 00:25:28.814 11:12:57 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:28.814 11:12:57 -- fips/fips.sh@127 -- # : 00:25:28.814 11:12:57 -- common/autotest_common.sh@626 -- # local arg=openssl 00:25:28.814 11:12:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:28.814 11:12:57 -- common/autotest_common.sh@630 -- # type -t openssl 00:25:28.814 11:12:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:28.814 11:12:57 -- common/autotest_common.sh@632 -- # type -P openssl 00:25:28.814 11:12:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:28.814 11:12:57 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:25:28.814 11:12:57 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:25:28.814 11:12:57 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:25:28.814 Error setting digest 00:25:28.814 00C282934D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:28.814 00C282934D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:28.814 11:12:57 -- common/autotest_common.sh@641 -- # es=1 00:25:28.814 11:12:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:28.814 11:12:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:28.814 11:12:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:28.814 11:12:57 -- fips/fips.sh@130 -- # nvmftestinit 00:25:28.814 11:12:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:28.814 11:12:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.814 11:12:57 -- nvmf/common.sh@437 -- # prepare_net_devs 
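[Editor's note] The fips.sh preamble above checks the OpenSSL version, confirms the FIPS provider module exists, points OPENSSL_CONF at a generated spdk_fips.conf, and then proves FIPS mode is active by requiring a non-approved digest to fail; the "Error setting digest" lines are the expected outcome, not a test failure. A condensed sketch of that sanity check, using only commands shown in the log:

    export OPENSSL_CONF=spdk_fips.conf
    # Expect both the base provider and the FIPS provider to be listed.
    openssl list -providers | grep name
    # Under FIPS, MD5 must be rejected; the test wraps this in NOT, so failure == pass.
    if echo hello | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded - FIPS provider is not enforcing" >&2
        exit 1
    fi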
00:25:28.814 11:12:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:28.814 11:12:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:28.814 11:12:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.814 11:12:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.814 11:12:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.814 11:12:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:28.814 11:12:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:28.814 11:12:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:28.814 11:12:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:28.814 11:12:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:28.814 11:12:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:28.814 11:12:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.814 11:12:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.814 11:12:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:28.814 11:12:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:28.814 11:12:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.814 11:12:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.814 11:12:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.814 11:12:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.814 11:12:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.814 11:12:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.814 11:12:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.814 11:12:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.814 11:12:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:28.814 11:12:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:28.814 Cannot find device "nvmf_tgt_br" 00:25:28.814 11:12:57 -- nvmf/common.sh@155 -- # true 00:25:28.814 11:12:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:29.073 Cannot find device "nvmf_tgt_br2" 00:25:29.073 11:12:57 -- nvmf/common.sh@156 -- # true 00:25:29.073 11:12:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:29.073 11:12:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:29.073 Cannot find device "nvmf_tgt_br" 00:25:29.073 11:12:57 -- nvmf/common.sh@158 -- # true 00:25:29.073 11:12:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:29.073 Cannot find device "nvmf_tgt_br2" 00:25:29.073 11:12:57 -- nvmf/common.sh@159 -- # true 00:25:29.073 11:12:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:29.073 11:12:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:29.073 11:12:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:29.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.073 11:12:57 -- nvmf/common.sh@162 -- # true 00:25:29.073 11:12:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:29.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.073 11:12:57 -- nvmf/common.sh@163 -- # true 00:25:29.073 11:12:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:29.073 11:12:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:29.073 11:12:57 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:29.073 11:12:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:29.073 11:12:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:29.073 11:12:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:29.073 11:12:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:29.073 11:12:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:29.073 11:12:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:29.073 11:12:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:29.073 11:12:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:29.073 11:12:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:29.073 11:12:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:29.073 11:12:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:29.073 11:12:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:29.073 11:12:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:29.074 11:12:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:29.074 11:12:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:29.074 11:12:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:29.332 11:12:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:29.332 11:12:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:29.332 11:12:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:29.332 11:12:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.332 11:12:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:29.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:25:29.332 00:25:29.332 --- 10.0.0.2 ping statistics --- 00:25:29.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.332 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:29.332 11:12:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:29.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:25:29.332 00:25:29.332 --- 10.0.0.3 ping statistics --- 00:25:29.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.332 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:29.332 11:12:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:29.332 00:25:29.332 --- 10.0.0.1 ping statistics --- 00:25:29.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.332 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:29.332 11:12:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.332 11:12:57 -- nvmf/common.sh@422 -- # return 0 00:25:29.332 11:12:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:29.332 11:12:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.332 11:12:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:29.332 11:12:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:29.332 11:12:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.332 11:12:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:29.332 11:12:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:29.332 11:12:57 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:29.332 11:12:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:29.332 11:12:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:29.332 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:29.332 11:12:57 -- nvmf/common.sh@470 -- # nvmfpid=95477 00:25:29.332 11:12:57 -- nvmf/common.sh@471 -- # waitforlisten 95477 00:25:29.332 11:12:57 -- common/autotest_common.sh@817 -- # '[' -z 95477 ']' 00:25:29.332 11:12:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.332 11:12:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:29.332 11:12:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:29.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.332 11:12:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.332 11:12:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:29.332 11:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:29.332 [2024-04-18 11:12:57.864103] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:29.332 [2024-04-18 11:12:57.864201] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.590 [2024-04-18 11:12:57.998636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.590 [2024-04-18 11:12:58.095541] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.590 [2024-04-18 11:12:58.095605] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.590 [2024-04-18 11:12:58.095617] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.590 [2024-04-18 11:12:58.095626] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.590 [2024-04-18 11:12:58.095633] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
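The target started here listens inside the veth/bridge topology that nvmf_veth_init just built; condensed to its essentials the setup is roughly the following (sketch only — the helper in nvmf/common.sh also tears down stale devices, adds the second target interface nvmf_tgt_if2 for 10.0.0.3, and installs the iptables ACCEPT rules seen above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The three pings above (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm that this bridged path is up before nvmf_tgt is launched in the namespace.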
00:25:29.590 [2024-04-18 11:12:58.095664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.526 11:12:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:30.526 11:12:58 -- common/autotest_common.sh@850 -- # return 0 00:25:30.526 11:12:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:30.526 11:12:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:30.526 11:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:30.526 11:12:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.526 11:12:58 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:30.526 11:12:58 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:30.526 11:12:58 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:30.526 11:12:58 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:30.526 11:12:58 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:30.526 11:12:58 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:30.526 11:12:58 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:30.526 11:12:58 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.526 [2024-04-18 11:12:59.072964] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.526 [2024-04-18 11:12:59.088947] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.526 [2024-04-18 11:12:59.089204] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.526 [2024-04-18 11:12:59.120357] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:30.526 malloc0 00:25:30.526 11:12:59 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:30.526 11:12:59 -- fips/fips.sh@147 -- # bdevperf_pid=95529 00:25:30.526 11:12:59 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:30.526 11:12:59 -- fips/fips.sh@148 -- # waitforlisten 95529 /var/tmp/bdevperf.sock 00:25:30.526 11:12:59 -- common/autotest_common.sh@817 -- # '[' -z 95529 ']' 00:25:30.526 11:12:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.526 11:12:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:30.526 11:12:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:30.526 11:12:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:30.526 11:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:30.784 [2024-04-18 11:12:59.216783] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:25:30.784 [2024-04-18 11:12:59.216887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95529 ] 00:25:30.784 [2024-04-18 11:12:59.349156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.042 [2024-04-18 11:12:59.457392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.637 11:13:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:31.637 11:13:00 -- common/autotest_common.sh@850 -- # return 0 00:25:31.637 11:13:00 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:31.896 [2024-04-18 11:13:00.358139] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:31.896 [2024-04-18 11:13:00.358257] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:31.896 TLSTESTn1 00:25:31.896 11:13:00 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:32.153 Running I/O for 10 seconds... 00:25:42.138 00:25:42.138 Latency(us) 00:25:42.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.138 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:42.138 Verification LBA range: start 0x0 length 0x2000 00:25:42.138 TLSTESTn1 : 10.02 3846.67 15.03 0.00 0.00 33210.79 7000.44 35031.97 00:25:42.138 =================================================================================================================== 00:25:42.138 Total : 3846.67 15.03 0.00 0.00 33210.79 7000.44 35031.97 00:25:42.138 0 00:25:42.138 11:13:10 -- fips/fips.sh@1 -- # cleanup 00:25:42.138 11:13:10 -- fips/fips.sh@15 -- # process_shm --id 0 00:25:42.138 11:13:10 -- common/autotest_common.sh@794 -- # type=--id 00:25:42.138 11:13:10 -- common/autotest_common.sh@795 -- # id=0 00:25:42.138 11:13:10 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:25:42.138 11:13:10 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:42.138 11:13:10 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:25:42.138 11:13:10 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:25:42.138 11:13:10 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:25:42.138 11:13:10 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:42.138 nvmf_trace.0 00:25:42.138 11:13:10 -- common/autotest_common.sh@809 -- # return 0 00:25:42.138 11:13:10 -- fips/fips.sh@16 -- # killprocess 95529 00:25:42.138 11:13:10 -- common/autotest_common.sh@936 -- # '[' -z 95529 ']' 00:25:42.138 11:13:10 -- common/autotest_common.sh@940 -- # kill -0 95529 00:25:42.138 11:13:10 -- common/autotest_common.sh@941 -- # uname 00:25:42.138 11:13:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.138 11:13:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95529 00:25:42.138 11:13:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:42.138 
11:13:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:42.138 11:13:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95529' 00:25:42.138 killing process with pid 95529 00:25:42.138 11:13:10 -- common/autotest_common.sh@955 -- # kill 95529 00:25:42.138 Received shutdown signal, test time was about 10.000000 seconds 00:25:42.138 00:25:42.138 Latency(us) 00:25:42.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.139 =================================================================================================================== 00:25:42.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.139 [2024-04-18 11:13:10.747569] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:42.139 11:13:10 -- common/autotest_common.sh@960 -- # wait 95529 00:25:42.397 11:13:10 -- fips/fips.sh@17 -- # nvmftestfini 00:25:42.397 11:13:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:42.397 11:13:10 -- nvmf/common.sh@117 -- # sync 00:25:42.397 11:13:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.397 11:13:11 -- nvmf/common.sh@120 -- # set +e 00:25:42.397 11:13:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.397 11:13:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.397 rmmod nvme_tcp 00:25:42.397 rmmod nvme_fabrics 00:25:42.655 rmmod nvme_keyring 00:25:42.655 11:13:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.655 11:13:11 -- nvmf/common.sh@124 -- # set -e 00:25:42.655 11:13:11 -- nvmf/common.sh@125 -- # return 0 00:25:42.655 11:13:11 -- nvmf/common.sh@478 -- # '[' -n 95477 ']' 00:25:42.655 11:13:11 -- nvmf/common.sh@479 -- # killprocess 95477 00:25:42.655 11:13:11 -- common/autotest_common.sh@936 -- # '[' -z 95477 ']' 00:25:42.655 11:13:11 -- common/autotest_common.sh@940 -- # kill -0 95477 00:25:42.655 11:13:11 -- common/autotest_common.sh@941 -- # uname 00:25:42.655 11:13:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.655 11:13:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95477 00:25:42.655 11:13:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:42.655 killing process with pid 95477 00:25:42.655 11:13:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:42.655 11:13:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95477' 00:25:42.655 11:13:11 -- common/autotest_common.sh@955 -- # kill 95477 00:25:42.655 [2024-04-18 11:13:11.080849] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:42.656 11:13:11 -- common/autotest_common.sh@960 -- # wait 95477 00:25:42.914 11:13:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:42.914 11:13:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:42.914 11:13:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:42.914 11:13:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.914 11:13:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.914 11:13:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.914 11:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.914 11:13:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.914 11:13:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:42.914 11:13:11 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:42.914 00:25:42.914 real 0m14.181s 00:25:42.914 user 0m19.298s 00:25:42.914 sys 0m5.675s 00:25:42.914 11:13:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:42.914 ************************************ 00:25:42.914 END TEST nvmf_fips 00:25:42.914 ************************************ 00:25:42.914 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:25:42.914 11:13:11 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:25:42.914 11:13:11 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.914 11:13:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:42.914 11:13:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:42.914 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:25:42.914 ************************************ 00:25:42.914 START TEST nvmf_fuzz 00:25:42.914 ************************************ 00:25:42.914 11:13:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.914 * Looking for test storage... 00:25:42.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:42.914 11:13:11 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:42.914 11:13:11 -- nvmf/common.sh@7 -- # uname -s 00:25:42.914 11:13:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.914 11:13:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.914 11:13:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.914 11:13:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.914 11:13:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.914 11:13:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.914 11:13:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.914 11:13:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.914 11:13:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.914 11:13:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.914 11:13:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:25:42.914 11:13:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:25:42.914 11:13:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.914 11:13:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.914 11:13:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:42.914 11:13:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.914 11:13:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:42.915 11:13:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.915 11:13:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.915 11:13:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.915 11:13:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.915 11:13:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.915 11:13:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.915 11:13:11 -- paths/export.sh@5 -- # export PATH 00:25:42.915 11:13:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.915 11:13:11 -- nvmf/common.sh@47 -- # : 0 00:25:42.915 11:13:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.915 11:13:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.915 11:13:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.915 11:13:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.915 11:13:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.915 11:13:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.915 11:13:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.915 11:13:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.915 11:13:11 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:42.915 11:13:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:42.915 11:13:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.915 11:13:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:42.915 11:13:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:42.915 11:13:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:42.915 11:13:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.915 11:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.173 11:13:11 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.173 11:13:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:43.173 11:13:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:43.173 11:13:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:43.173 11:13:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:43.173 11:13:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:43.173 11:13:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:43.173 11:13:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.174 11:13:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.174 11:13:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:43.174 11:13:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:43.174 11:13:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:43.174 11:13:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:43.174 11:13:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:43.174 11:13:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.174 11:13:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:43.174 11:13:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:43.174 11:13:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:43.174 11:13:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:43.174 11:13:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:43.174 11:13:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:43.174 Cannot find device "nvmf_tgt_br" 00:25:43.174 11:13:11 -- nvmf/common.sh@155 -- # true 00:25:43.174 11:13:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.174 Cannot find device "nvmf_tgt_br2" 00:25:43.174 11:13:11 -- nvmf/common.sh@156 -- # true 00:25:43.174 11:13:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:43.174 11:13:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:43.174 Cannot find device "nvmf_tgt_br" 00:25:43.174 11:13:11 -- nvmf/common.sh@158 -- # true 00:25:43.174 11:13:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:43.174 Cannot find device "nvmf_tgt_br2" 00:25:43.174 11:13:11 -- nvmf/common.sh@159 -- # true 00:25:43.174 11:13:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:43.174 11:13:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:43.174 11:13:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:43.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.174 11:13:11 -- nvmf/common.sh@162 -- # true 00:25:43.174 11:13:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:43.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.174 11:13:11 -- nvmf/common.sh@163 -- # true 00:25:43.174 11:13:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:43.174 11:13:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:43.174 11:13:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:43.174 11:13:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:43.174 11:13:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:43.174 11:13:11 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:43.174 11:13:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:43.174 11:13:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:43.174 11:13:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:43.174 11:13:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:43.174 11:13:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:43.174 11:13:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:43.174 11:13:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:43.174 11:13:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:43.174 11:13:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:43.174 11:13:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:43.174 11:13:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:43.174 11:13:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:43.174 11:13:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:43.432 11:13:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:43.432 11:13:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:43.432 11:13:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:43.432 11:13:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:43.432 11:13:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:43.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:25:43.432 00:25:43.432 --- 10.0.0.2 ping statistics --- 00:25:43.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.432 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:43.432 11:13:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:43.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:43.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:43.432 00:25:43.432 --- 10.0.0.3 ping statistics --- 00:25:43.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.432 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:43.432 11:13:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:43.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:43.432 00:25:43.432 --- 10.0.0.1 ping statistics --- 00:25:43.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.432 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:43.432 11:13:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.432 11:13:11 -- nvmf/common.sh@422 -- # return 0 00:25:43.432 11:13:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:43.432 11:13:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.432 11:13:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:43.432 11:13:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:43.432 11:13:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.432 11:13:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:43.432 11:13:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:43.432 11:13:11 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=95881 00:25:43.432 11:13:11 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:43.432 11:13:11 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:43.432 11:13:11 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 95881 00:25:43.432 11:13:11 -- common/autotest_common.sh@817 -- # '[' -z 95881 ']' 00:25:43.433 11:13:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.433 11:13:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:43.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.433 11:13:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:43.433 11:13:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:43.433 11:13:11 -- common/autotest_common.sh@10 -- # set +x 00:25:44.368 11:13:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:44.368 11:13:12 -- common/autotest_common.sh@850 -- # return 0 00:25:44.368 11:13:12 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:44.368 11:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.368 11:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:44.368 11:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.368 11:13:12 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:44.368 11:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.368 11:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:44.368 Malloc0 00:25:44.368 11:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.368 11:13:12 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:44.368 11:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.368 11:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:44.368 11:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.368 11:13:12 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:44.368 11:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.368 11:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:44.368 11:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.368 11:13:12 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.368 11:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.368 11:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:44.368 11:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.368 11:13:12 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:44.368 11:13:12 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:44.949 Shutting down the fuzz application 00:25:44.949 11:13:13 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:45.207 Shutting down the fuzz application 00:25:45.207 11:13:13 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.207 11:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.207 11:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:45.207 11:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.207 11:13:13 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:45.207 11:13:13 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:45.207 11:13:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:45.207 11:13:13 -- nvmf/common.sh@117 -- # sync 00:25:45.207 11:13:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:45.207 11:13:13 -- nvmf/common.sh@120 -- # set +e 00:25:45.207 11:13:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:45.207 
11:13:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:45.207 rmmod nvme_tcp 00:25:45.207 rmmod nvme_fabrics 00:25:45.207 rmmod nvme_keyring 00:25:45.207 11:13:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:45.207 11:13:13 -- nvmf/common.sh@124 -- # set -e 00:25:45.207 11:13:13 -- nvmf/common.sh@125 -- # return 0 00:25:45.207 11:13:13 -- nvmf/common.sh@478 -- # '[' -n 95881 ']' 00:25:45.207 11:13:13 -- nvmf/common.sh@479 -- # killprocess 95881 00:25:45.207 11:13:13 -- common/autotest_common.sh@936 -- # '[' -z 95881 ']' 00:25:45.207 11:13:13 -- common/autotest_common.sh@940 -- # kill -0 95881 00:25:45.207 11:13:13 -- common/autotest_common.sh@941 -- # uname 00:25:45.207 11:13:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:45.208 11:13:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95881 00:25:45.208 11:13:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:45.208 killing process with pid 95881 00:25:45.208 11:13:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:45.208 11:13:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95881' 00:25:45.208 11:13:13 -- common/autotest_common.sh@955 -- # kill 95881 00:25:45.208 11:13:13 -- common/autotest_common.sh@960 -- # wait 95881 00:25:45.466 11:13:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:45.466 11:13:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:45.466 11:13:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:45.466 11:13:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.466 11:13:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:45.466 11:13:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.466 11:13:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.466 11:13:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.466 11:13:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:45.466 11:13:14 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:25:45.725 00:25:45.725 real 0m2.662s 00:25:45.725 user 0m2.816s 00:25:45.725 sys 0m0.638s 00:25:45.725 ************************************ 00:25:45.725 END TEST nvmf_fuzz 00:25:45.725 ************************************ 00:25:45.725 11:13:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:45.725 11:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:45.725 11:13:14 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:45.725 11:13:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:45.725 11:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:45.725 11:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:45.725 ************************************ 00:25:45.725 START TEST nvmf_multiconnection 00:25:45.725 ************************************ 00:25:45.725 11:13:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:45.725 * Looking for test storage... 
00:25:45.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:45.725 11:13:14 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:45.725 11:13:14 -- nvmf/common.sh@7 -- # uname -s 00:25:45.725 11:13:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.725 11:13:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.725 11:13:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.725 11:13:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.725 11:13:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.725 11:13:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.725 11:13:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.725 11:13:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.725 11:13:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.725 11:13:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.725 11:13:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:25:45.725 11:13:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:25:45.725 11:13:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.725 11:13:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.725 11:13:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:45.725 11:13:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.725 11:13:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.725 11:13:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.725 11:13:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.725 11:13:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.725 11:13:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.725 11:13:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.725 11:13:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.725 11:13:14 -- paths/export.sh@5 -- # export PATH 00:25:45.725 11:13:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.725 11:13:14 -- nvmf/common.sh@47 -- # : 0 00:25:45.725 11:13:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.725 11:13:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.725 11:13:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.725 11:13:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.725 11:13:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.725 11:13:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.725 11:13:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.725 11:13:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.725 11:13:14 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:45.725 11:13:14 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:45.725 11:13:14 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:45.725 11:13:14 -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:45.725 11:13:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:45.725 11:13:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.725 11:13:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:45.725 11:13:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:45.725 11:13:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:45.725 11:13:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.725 11:13:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.725 11:13:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.725 11:13:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:45.725 11:13:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:45.725 11:13:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:45.725 11:13:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:45.725 11:13:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:45.725 11:13:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:45.725 11:13:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.725 11:13:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.725 11:13:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:45.725 11:13:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:45.725 11:13:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:45.725 11:13:14 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:45.725 11:13:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:45.725 11:13:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.725 11:13:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:45.725 11:13:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:45.725 11:13:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:45.725 11:13:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:45.725 11:13:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:45.725 11:13:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:45.984 Cannot find device "nvmf_tgt_br" 00:25:45.984 11:13:14 -- nvmf/common.sh@155 -- # true 00:25:45.984 11:13:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:45.984 Cannot find device "nvmf_tgt_br2" 00:25:45.984 11:13:14 -- nvmf/common.sh@156 -- # true 00:25:45.984 11:13:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:45.984 11:13:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:45.984 Cannot find device "nvmf_tgt_br" 00:25:45.984 11:13:14 -- nvmf/common.sh@158 -- # true 00:25:45.984 11:13:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:45.984 Cannot find device "nvmf_tgt_br2" 00:25:45.984 11:13:14 -- nvmf/common.sh@159 -- # true 00:25:45.984 11:13:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:45.984 11:13:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:45.984 11:13:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:45.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.984 11:13:14 -- nvmf/common.sh@162 -- # true 00:25:45.984 11:13:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:45.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.984 11:13:14 -- nvmf/common.sh@163 -- # true 00:25:45.984 11:13:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:45.984 11:13:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:45.984 11:13:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:45.984 11:13:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:45.984 11:13:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:45.984 11:13:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:45.984 11:13:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:45.984 11:13:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:45.984 11:13:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:45.984 11:13:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:45.984 11:13:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:45.984 11:13:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:45.984 11:13:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:45.984 11:13:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:45.984 11:13:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:25:45.984 11:13:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:45.984 11:13:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:45.984 11:13:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:45.984 11:13:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:45.984 11:13:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:46.242 11:13:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:46.242 11:13:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:46.242 11:13:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:46.242 11:13:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:46.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:25:46.242 00:25:46.242 --- 10.0.0.2 ping statistics --- 00:25:46.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.242 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:46.242 11:13:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:46.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:46.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:25:46.242 00:25:46.242 --- 10.0.0.3 ping statistics --- 00:25:46.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.242 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:46.242 11:13:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:46.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:46.242 00:25:46.242 --- 10.0.0.1 ping statistics --- 00:25:46.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.242 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:46.242 11:13:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.242 11:13:14 -- nvmf/common.sh@422 -- # return 0 00:25:46.242 11:13:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:46.242 11:13:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.242 11:13:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:46.242 11:13:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:46.242 11:13:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.242 11:13:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:46.242 11:13:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:46.242 11:13:14 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:46.242 11:13:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:46.242 11:13:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:46.242 11:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:46.242 11:13:14 -- nvmf/common.sh@470 -- # nvmfpid=96094 00:25:46.242 11:13:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.242 11:13:14 -- nvmf/common.sh@471 -- # waitforlisten 96094 00:25:46.242 11:13:14 -- common/autotest_common.sh@817 -- # '[' -z 96094 ']' 00:25:46.242 11:13:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.242 11:13:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:46.242 11:13:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.242 11:13:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:46.242 11:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:46.242 [2024-04-18 11:13:14.739452] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:46.242 [2024-04-18 11:13:14.739843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.242 [2024-04-18 11:13:14.880324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.500 [2024-04-18 11:13:14.984951] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.500 [2024-04-18 11:13:14.985287] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.500 [2024-04-18 11:13:14.985509] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.500 [2024-04-18 11:13:14.985651] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.500 [2024-04-18 11:13:14.985692] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.500 [2024-04-18 11:13:14.985912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.500 [2024-04-18 11:13:14.986144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.500 [2024-04-18 11:13:14.986071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.500 [2024-04-18 11:13:14.986143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.435 11:13:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:47.435 11:13:15 -- common/autotest_common.sh@850 -- # return 0 00:25:47.435 11:13:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:47.435 11:13:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.435 11:13:15 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 [2024-04-18 11:13:15.778934] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@21 -- # seq 1 11 00:25:47.435 11:13:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.435 11:13:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 Malloc1 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 [2024-04-18 11:13:15.844712] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.435 11:13:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 Malloc2 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.435 11:13:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 Malloc3 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.435 11:13:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 Malloc4 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.435 11:13:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:47.435 11:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:15 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 Malloc5 00:25:47.435 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:47.435 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:47.435 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:47.435 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.435 11:13:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:47.435 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:25:47.435 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 Malloc6 00:25:47.435 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 11:13:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:47.435 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.694 11:13:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 Malloc7 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.694 11:13:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 Malloc8 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:47.694 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.694 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.694 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.694 11:13:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.694 11:13:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 Malloc9 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.695 11:13:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 Malloc10 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.695 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:25:47.695 11:13:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.695 11:13:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:47.695 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.695 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.953 Malloc11 00:25:47.953 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.953 11:13:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:47.953 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.953 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.953 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.953 11:13:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:47.953 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.953 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.953 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.953 11:13:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:47.953 11:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.953 11:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:47.953 11:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.953 11:13:16 -- target/multiconnection.sh@28 -- # seq 1 11 00:25:47.953 11:13:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.953 11:13:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:47.953 11:13:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:47.953 11:13:16 -- common/autotest_common.sh@1184 -- # local i=0 00:25:47.953 11:13:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:47.953 11:13:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:47.953 11:13:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:50.479 11:13:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:50.479 11:13:18 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:25:50.479 11:13:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:50.479 11:13:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:25:50.479 11:13:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.479 11:13:18 -- common/autotest_common.sh@1194 -- # return 0 00:25:50.479 11:13:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.479 11:13:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:50.479 11:13:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:50.479 11:13:18 -- common/autotest_common.sh@1184 -- # local i=0 00:25:50.479 11:13:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.479 11:13:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:50.479 11:13:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:52.376 11:13:20 -- 
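At this point all eleven subsystems exist and the initiator is connecting to them one at a time. The per-subsystem setup repeated above always consists of the same four RPCs: a 64 MiB malloc bdev with 512-byte blocks, a subsystem with serial SPDKn that allows any host (-a), a namespace mapping the bdev into that subsystem, and a TCP listener on 10.0.0.2:4420. Collapsed into a loop it looks roughly like this (a sketch assuming the stock scripts/rpc.py client stands in for the log's rpc_cmd wrapper):

# Sketch of the subsystem setup loop logged above (rpc.py path is assumed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
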
common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:52.376 11:13:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:52.376 11:13:20 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:25:52.376 11:13:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:25:52.376 11:13:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:52.376 11:13:20 -- common/autotest_common.sh@1194 -- # return 0 00:25:52.376 11:13:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.376 11:13:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:52.376 11:13:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:52.376 11:13:20 -- common/autotest_common.sh@1184 -- # local i=0 00:25:52.376 11:13:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.376 11:13:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:52.376 11:13:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:54.327 11:13:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:54.327 11:13:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:54.327 11:13:22 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:25:54.327 11:13:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:25:54.327 11:13:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.327 11:13:22 -- common/autotest_common.sh@1194 -- # return 0 00:25:54.327 11:13:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.327 11:13:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:54.585 11:13:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:54.585 11:13:23 -- common/autotest_common.sh@1184 -- # local i=0 00:25:54.585 11:13:23 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.585 11:13:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:54.585 11:13:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:56.485 11:13:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:56.485 11:13:25 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:25:56.485 11:13:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:56.485 11:13:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:25:56.485 11:13:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.485 11:13:25 -- common/autotest_common.sh@1194 -- # return 0 00:25:56.485 11:13:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.485 11:13:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:56.744 11:13:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:56.744 11:13:25 -- common/autotest_common.sh@1184 -- # local i=0 00:25:56.744 11:13:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.744 11:13:25 -- 
common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:56.744 11:13:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:59.275 11:13:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:59.275 11:13:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:59.275 11:13:27 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:25:59.275 11:13:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:25:59.275 11:13:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:59.275 11:13:27 -- common/autotest_common.sh@1194 -- # return 0 00:25:59.275 11:13:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.275 11:13:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:59.275 11:13:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:59.275 11:13:27 -- common/autotest_common.sh@1184 -- # local i=0 00:25:59.275 11:13:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.275 11:13:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:59.275 11:13:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:26:01.176 11:13:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:26:01.176 11:13:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:26:01.176 11:13:29 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:26:01.176 11:13:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:26:01.176 11:13:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.176 11:13:29 -- common/autotest_common.sh@1194 -- # return 0 00:26:01.176 11:13:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.176 11:13:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:01.176 11:13:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:01.176 11:13:29 -- common/autotest_common.sh@1184 -- # local i=0 00:26:01.176 11:13:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.176 11:13:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:26:01.176 11:13:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:26:03.076 11:13:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:26:03.076 11:13:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:26:03.076 11:13:31 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:26:03.076 11:13:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:26:03.076 11:13:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.076 11:13:31 -- common/autotest_common.sh@1194 -- # return 0 00:26:03.076 11:13:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.076 11:13:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:03.334 11:13:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:03.334 11:13:31 -- common/autotest_common.sh@1184 -- # local i=0 
00:26:03.334 11:13:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:26:03.334 11:13:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:26:03.334 11:13:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:26:05.279 11:13:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:26:05.279 11:13:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:26:05.279 11:13:33 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:26:05.279 11:13:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:26:05.279 11:13:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:26:05.279 11:13:33 -- common/autotest_common.sh@1194 -- # return 0 00:26:05.279 11:13:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.279 11:13:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:05.537 11:13:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:05.537 11:13:34 -- common/autotest_common.sh@1184 -- # local i=0 00:26:05.537 11:13:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.537 11:13:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:26:05.537 11:13:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:26:07.442 11:13:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:26:07.442 11:13:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:26:07.442 11:13:36 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:26:07.701 11:13:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:26:07.701 11:13:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.701 11:13:36 -- common/autotest_common.sh@1194 -- # return 0 00:26:07.701 11:13:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.701 11:13:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:07.701 11:13:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:07.701 11:13:36 -- common/autotest_common.sh@1184 -- # local i=0 00:26:07.701 11:13:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.701 11:13:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:26:07.701 11:13:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:26:09.655 11:13:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:26:09.655 11:13:38 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:26:09.655 11:13:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:26:09.920 11:13:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:26:09.920 11:13:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.920 11:13:38 -- common/autotest_common.sh@1194 -- # return 0 00:26:09.920 11:13:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.920 11:13:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:09.920 11:13:38 
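Every nvme connect above is immediately followed by waitforserial, which does nothing more than poll lsblk until a block device whose serial matches SPDKn shows up, giving up after roughly 15 attempts. A minimal stand-alone version of that polling loop, based on the checks visible in the log:

# Sketch of the waitforserial polling used after each connect above.
# The real helper also takes an expected device count; one device is assumed here.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    echo "no namespace with serial $serial appeared" >&2
    return 1
}
waitforserial SPDK1
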
-- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:09.920 11:13:38 -- common/autotest_common.sh@1184 -- # local i=0 00:26:09.920 11:13:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.920 11:13:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:26:09.920 11:13:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:26:12.449 11:13:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:26:12.449 11:13:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:26:12.449 11:13:40 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:26:12.449 11:13:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:26:12.449 11:13:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.449 11:13:40 -- common/autotest_common.sh@1194 -- # return 0 00:26:12.449 11:13:40 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:12.449 [global] 00:26:12.449 thread=1 00:26:12.449 invalidate=1 00:26:12.449 rw=read 00:26:12.449 time_based=1 00:26:12.449 runtime=10 00:26:12.449 ioengine=libaio 00:26:12.449 direct=1 00:26:12.449 bs=262144 00:26:12.449 iodepth=64 00:26:12.449 norandommap=1 00:26:12.449 numjobs=1 00:26:12.449 00:26:12.449 [job0] 00:26:12.449 filename=/dev/nvme0n1 00:26:12.449 [job1] 00:26:12.449 filename=/dev/nvme10n1 00:26:12.449 [job2] 00:26:12.449 filename=/dev/nvme1n1 00:26:12.449 [job3] 00:26:12.449 filename=/dev/nvme2n1 00:26:12.449 [job4] 00:26:12.449 filename=/dev/nvme3n1 00:26:12.449 [job5] 00:26:12.449 filename=/dev/nvme4n1 00:26:12.449 [job6] 00:26:12.449 filename=/dev/nvme5n1 00:26:12.449 [job7] 00:26:12.449 filename=/dev/nvme6n1 00:26:12.449 [job8] 00:26:12.449 filename=/dev/nvme7n1 00:26:12.449 [job9] 00:26:12.449 filename=/dev/nvme8n1 00:26:12.449 [job10] 00:26:12.449 filename=/dev/nvme9n1 00:26:12.449 Could not set queue depth (nvme0n1) 00:26:12.449 Could not set queue depth (nvme10n1) 00:26:12.449 Could not set queue depth (nvme1n1) 00:26:12.449 Could not set queue depth (nvme2n1) 00:26:12.449 Could not set queue depth (nvme3n1) 00:26:12.449 Could not set queue depth (nvme4n1) 00:26:12.449 Could not set queue depth (nvme5n1) 00:26:12.449 Could not set queue depth (nvme6n1) 00:26:12.449 Could not set queue depth (nvme7n1) 00:26:12.449 Could not set queue depth (nvme8n1) 00:26:12.449 Could not set queue depth (nvme9n1) 00:26:12.449 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
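The fio-wrapper invocation above is a thin front end: the values passed as -i 262144, -d 64, -t read and -r 10 line up with bs, iodepth, rw and runtime in the [global]/[jobN] listing printed by fio, and one [jobN] stanza is emitted per connected namespace. Written out as a plain job file it corresponds to roughly the following (a sketch; the wrapper discovers the /dev/nvmeXn1 names at run time):

# Sketch of the job file behind the read pass above; device names vary per run.
cat > /tmp/multiconnection-read.fio <<'EOF'
[global]
ioengine=libaio
direct=1
thread=1
invalidate=1
norandommap=1
time_based=1
runtime=10
rw=read
bs=262144
iodepth=64
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme10n1
; ...one [jobN] section per remaining namespace, up to job10
EOF
fio /tmp/multiconnection-read.fio
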
iodepth=64 00:26:12.449 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:12.449 fio-3.35 00:26:12.449 Starting 11 threads 00:26:24.654 00:26:24.654 job0: (groupid=0, jobs=1): err= 0: pid=96566: Thu Apr 18 11:13:51 2024 00:26:24.654 read: IOPS=1127, BW=282MiB/s (296MB/s)(2831MiB/10041msec) 00:26:24.654 slat (usec): min=17, max=36142, avg=878.45, stdev=3282.70 00:26:24.654 clat (msec): min=4, max=100, avg=55.75, stdev=11.65 00:26:24.654 lat (msec): min=4, max=100, avg=56.63, stdev=11.96 00:26:24.654 clat percentiles (msec): 00:26:24.654 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 42], 20.00th=[ 48], 00:26:24.654 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 60], 00:26:24.654 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 69], 95.00th=[ 72], 00:26:24.654 | 99.00th=[ 80], 99.50th=[ 82], 99.90th=[ 90], 99.95th=[ 101], 00:26:24.654 | 99.99th=[ 101] 00:26:24.654 bw ( KiB/s): min=260096, max=481280, per=15.71%, avg=288226.70, stdev=47207.06, samples=20 00:26:24.654 iops : min= 1016, max= 1880, avg=1125.80, stdev=184.43, samples=20 00:26:24.654 lat (msec) : 10=0.15%, 20=0.84%, 50=23.23%, 100=75.73%, 250=0.05% 00:26:24.654 cpu : usr=0.45%, sys=3.48%, ctx=2128, majf=0, minf=4097 00:26:24.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:24.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.654 issued rwts: total=11324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.654 job1: (groupid=0, jobs=1): err= 0: pid=96567: Thu Apr 18 11:13:51 2024 00:26:24.654 read: IOPS=500, BW=125MiB/s (131MB/s)(1262MiB/10087msec) 00:26:24.654 slat (usec): min=16, max=74699, avg=1934.96, stdev=6477.21 00:26:24.654 clat (usec): min=753, max=209781, avg=125686.71, stdev=35930.69 00:26:24.654 lat (usec): min=780, max=214952, avg=127621.67, stdev=36941.45 00:26:24.654 clat percentiles (msec): 00:26:24.654 | 1.00th=[ 14], 5.00th=[ 73], 10.00th=[ 86], 20.00th=[ 94], 00:26:24.654 | 30.00th=[ 101], 40.00th=[ 123], 50.00th=[ 138], 60.00th=[ 146], 00:26:24.654 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 00:26:24.654 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 207], 99.95th=[ 209], 00:26:24.654 | 99.99th=[ 211] 00:26:24.654 bw ( KiB/s): min=94208, max=229940, per=6.95%, avg=127580.95, stdev=36446.90, samples=20 00:26:24.654 iops : min= 368, max= 898, avg=498.20, stdev=142.27, samples=20 00:26:24.654 lat (usec) : 1000=0.02% 00:26:24.654 lat (msec) : 2=0.77%, 4=0.12%, 20=0.85%, 50=2.14%, 100=25.33% 00:26:24.654 lat (msec) : 250=70.77% 00:26:24.654 cpu : usr=0.19%, sys=1.81%, ctx=1042, majf=0, minf=4097 00:26:24.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:24.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.654 issued rwts: total=5049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.654 job2: (groupid=0, jobs=1): err= 0: pid=96568: Thu Apr 18 11:13:51 2024 00:26:24.654 read: IOPS=612, BW=153MiB/s (161MB/s)(1544MiB/10074msec) 00:26:24.654 slat (usec): min=17, max=118079, avg=1614.13, 
stdev=7172.46 00:26:24.654 clat (msec): min=16, max=235, avg=102.62, stdev=42.35 00:26:24.654 lat (msec): min=16, max=247, avg=104.24, stdev=43.48 00:26:24.654 clat percentiles (msec): 00:26:24.654 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 77], 00:26:24.654 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 100], 60.00th=[ 126], 00:26:24.654 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 157], 00:26:24.654 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 207], 99.95th=[ 220], 00:26:24.654 | 99.99th=[ 236] 00:26:24.654 bw ( KiB/s): min=98304, max=476672, per=8.52%, avg=156386.10, stdev=82044.73, samples=20 00:26:24.654 iops : min= 384, max= 1862, avg=610.80, stdev=320.50, samples=20 00:26:24.654 lat (msec) : 20=0.52%, 50=18.27%, 100=33.06%, 250=48.15% 00:26:24.654 cpu : usr=0.14%, sys=1.94%, ctx=1337, majf=0, minf=4097 00:26:24.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:24.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.654 issued rwts: total=6174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.654 job3: (groupid=0, jobs=1): err= 0: pid=96569: Thu Apr 18 11:13:51 2024 00:26:24.654 read: IOPS=516, BW=129MiB/s (135MB/s)(1299MiB/10067msec) 00:26:24.654 slat (usec): min=17, max=109857, avg=1892.53, stdev=7443.09 00:26:24.654 clat (msec): min=34, max=238, avg=121.86, stdev=27.66 00:26:24.654 lat (msec): min=34, max=244, avg=123.75, stdev=28.87 00:26:24.654 clat percentiles (msec): 00:26:24.654 | 1.00th=[ 72], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 92], 00:26:24.654 | 30.00th=[ 97], 40.00th=[ 108], 50.00th=[ 131], 60.00th=[ 138], 00:26:24.654 | 70.00th=[ 144], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 161], 00:26:24.654 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 180], 99.95th=[ 203], 00:26:24.654 | 99.99th=[ 239] 00:26:24.654 bw ( KiB/s): min=96768, max=180886, per=7.16%, avg=131363.90, stdev=29703.18, samples=20 00:26:24.654 iops : min= 378, max= 706, avg=513.00, stdev=115.93, samples=20 00:26:24.654 lat (msec) : 50=0.19%, 100=34.65%, 250=65.15% 00:26:24.655 cpu : usr=0.20%, sys=1.91%, ctx=988, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=5197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 job4: (groupid=0, jobs=1): err= 0: pid=96570: Thu Apr 18 11:13:51 2024 00:26:24.655 read: IOPS=560, BW=140MiB/s (147MB/s)(1413MiB/10091msec) 00:26:24.655 slat (usec): min=17, max=126624, avg=1738.93, stdev=6408.29 00:26:24.655 clat (msec): min=71, max=268, avg=112.34, stdev=18.58 00:26:24.655 lat (msec): min=71, max=273, avg=114.07, stdev=19.62 00:26:24.655 clat percentiles (msec): 00:26:24.655 | 1.00th=[ 78], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 95], 00:26:24.655 | 30.00th=[ 103], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 116], 00:26:24.655 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 134], 95.00th=[ 144], 00:26:24.655 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 268], 00:26:24.655 | 99.99th=[ 271] 00:26:24.655 bw ( KiB/s): min=95422, max=179712, per=7.80%, avg=143062.60, stdev=19724.26, samples=20 00:26:24.655 iops : 
min= 372, max= 702, avg=558.75, stdev=77.12, samples=20 00:26:24.655 lat (msec) : 100=26.79%, 250=73.16%, 500=0.05% 00:26:24.655 cpu : usr=0.25%, sys=1.75%, ctx=1150, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=5651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 job5: (groupid=0, jobs=1): err= 0: pid=96571: Thu Apr 18 11:13:51 2024 00:26:24.655 read: IOPS=596, BW=149MiB/s (156MB/s)(1506MiB/10089msec) 00:26:24.655 slat (usec): min=17, max=68466, avg=1656.30, stdev=5748.59 00:26:24.655 clat (msec): min=26, max=210, avg=105.33, stdev=22.09 00:26:24.655 lat (msec): min=28, max=210, avg=106.99, stdev=22.94 00:26:24.655 clat percentiles (msec): 00:26:24.655 | 1.00th=[ 47], 5.00th=[ 63], 10.00th=[ 77], 20.00th=[ 88], 00:26:24.655 | 30.00th=[ 95], 40.00th=[ 104], 50.00th=[ 110], 60.00th=[ 114], 00:26:24.655 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 138], 00:26:24.655 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 176], 99.95th=[ 180], 00:26:24.655 | 99.99th=[ 211] 00:26:24.655 bw ( KiB/s): min=118784, max=241664, per=8.31%, avg=152469.60, stdev=28665.84, samples=20 00:26:24.655 iops : min= 464, max= 944, avg=595.40, stdev=111.94, samples=20 00:26:24.655 lat (msec) : 50=1.35%, 100=34.66%, 250=64.00% 00:26:24.655 cpu : usr=0.18%, sys=2.10%, ctx=1236, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=6022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 job6: (groupid=0, jobs=1): err= 0: pid=96572: Thu Apr 18 11:13:51 2024 00:26:24.655 read: IOPS=546, BW=137MiB/s (143MB/s)(1373MiB/10049msec) 00:26:24.655 slat (usec): min=13, max=84450, avg=1773.41, stdev=6861.22 00:26:24.655 clat (msec): min=27, max=222, avg=115.14, stdev=36.38 00:26:24.655 lat (msec): min=28, max=231, avg=116.92, stdev=37.46 00:26:24.655 clat percentiles (msec): 00:26:24.655 | 1.00th=[ 48], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 82], 00:26:24.655 | 30.00th=[ 90], 40.00th=[ 96], 50.00th=[ 129], 60.00th=[ 140], 00:26:24.655 | 70.00th=[ 144], 80.00th=[ 150], 90.00th=[ 157], 95.00th=[ 161], 00:26:24.655 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 211], 99.95th=[ 213], 00:26:24.655 | 99.99th=[ 222] 00:26:24.655 bw ( KiB/s): min=95422, max=270336, per=7.57%, avg=138903.70, stdev=48980.30, samples=20 00:26:24.655 iops : min= 372, max= 1056, avg=542.55, stdev=191.35, samples=20 00:26:24.655 lat (msec) : 50=1.48%, 100=42.07%, 250=56.46% 00:26:24.655 cpu : usr=0.21%, sys=1.88%, ctx=937, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=5491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 job7: (groupid=0, jobs=1): err= 0: pid=96573: Thu Apr 18 
11:13:51 2024 00:26:24.655 read: IOPS=633, BW=158MiB/s (166MB/s)(1591MiB/10049msec) 00:26:24.655 slat (usec): min=12, max=79710, avg=1550.47, stdev=5274.11 00:26:24.655 clat (msec): min=12, max=178, avg=99.37, stdev=28.20 00:26:24.655 lat (msec): min=12, max=186, avg=100.92, stdev=28.97 00:26:24.655 clat percentiles (msec): 00:26:24.655 | 1.00th=[ 24], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 70], 00:26:24.655 | 30.00th=[ 86], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 112], 00:26:24.655 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 131], 95.00th=[ 140], 00:26:24.655 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 178], 00:26:24.655 | 99.99th=[ 180] 00:26:24.655 bw ( KiB/s): min=111616, max=253952, per=8.79%, avg=161276.85, stdev=43583.75, samples=20 00:26:24.655 iops : min= 436, max= 992, avg=629.95, stdev=170.20, samples=20 00:26:24.655 lat (msec) : 20=0.61%, 50=3.19%, 100=43.44%, 250=52.76% 00:26:24.655 cpu : usr=0.22%, sys=2.27%, ctx=1313, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=6363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 job8: (groupid=0, jobs=1): err= 0: pid=96574: Thu Apr 18 11:13:51 2024 00:26:24.655 read: IOPS=1032, BW=258MiB/s (271MB/s)(2591MiB/10037msec) 00:26:24.655 slat (usec): min=17, max=51576, avg=951.79, stdev=3533.78 00:26:24.655 clat (msec): min=30, max=126, avg=60.93, stdev=10.61 00:26:24.655 lat (msec): min=30, max=140, avg=61.88, stdev=10.98 00:26:24.655 clat percentiles (msec): 00:26:24.655 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 53], 00:26:24.655 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:26:24.655 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 78], 00:26:24.655 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 122], 99.95th=[ 122], 00:26:24.655 | 99.99th=[ 127] 00:26:24.655 bw ( KiB/s): min=157184, max=294400, per=14.37%, avg=263600.30, stdev=28307.14, samples=20 00:26:24.655 iops : min= 614, max= 1150, avg=1029.55, stdev=110.56, samples=20 00:26:24.655 lat (msec) : 50=12.11%, 100=87.30%, 250=0.59% 00:26:24.655 cpu : usr=0.54%, sys=3.32%, ctx=1907, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=10365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 job9: (groupid=0, jobs=1): err= 0: pid=96575: Thu Apr 18 11:13:51 2024 00:26:24.655 read: IOPS=554, BW=139MiB/s (145MB/s)(1399MiB/10092msec) 00:26:24.655 slat (usec): min=15, max=57432, avg=1767.27, stdev=5899.45 00:26:24.655 clat (msec): min=24, max=227, avg=113.47, stdev=20.85 00:26:24.655 lat (msec): min=24, max=227, avg=115.23, stdev=21.72 00:26:24.655 clat percentiles (msec): 00:26:24.655 | 1.00th=[ 54], 5.00th=[ 84], 10.00th=[ 89], 20.00th=[ 95], 00:26:24.655 | 30.00th=[ 102], 40.00th=[ 110], 50.00th=[ 116], 60.00th=[ 121], 00:26:24.655 | 70.00th=[ 125], 80.00th=[ 130], 90.00th=[ 136], 95.00th=[ 146], 00:26:24.655 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 228], 99.95th=[ 228], 00:26:24.655 | 
99.99th=[ 228] 00:26:24.655 bw ( KiB/s): min=111616, max=183296, per=7.72%, avg=141618.00, stdev=20253.68, samples=20 00:26:24.655 iops : min= 436, max= 716, avg=553.15, stdev=79.10, samples=20 00:26:24.655 lat (msec) : 50=0.55%, 100=27.94%, 250=71.51% 00:26:24.655 cpu : usr=0.17%, sys=1.98%, ctx=1147, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=5595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 job10: (groupid=0, jobs=1): err= 0: pid=96576: Thu Apr 18 11:13:51 2024 00:26:24.655 read: IOPS=505, BW=126MiB/s (132MB/s)(1273MiB/10073msec) 00:26:24.655 slat (usec): min=18, max=97461, avg=1961.93, stdev=7070.59 00:26:24.655 clat (msec): min=11, max=208, avg=124.54, stdev=28.78 00:26:24.655 lat (msec): min=12, max=254, avg=126.50, stdev=29.91 00:26:24.655 clat percentiles (msec): 00:26:24.655 | 1.00th=[ 60], 5.00th=[ 84], 10.00th=[ 89], 20.00th=[ 95], 00:26:24.655 | 30.00th=[ 101], 40.00th=[ 110], 50.00th=[ 134], 60.00th=[ 142], 00:26:24.655 | 70.00th=[ 146], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:26:24.655 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 199], 00:26:24.655 | 99.99th=[ 209] 00:26:24.655 bw ( KiB/s): min=102912, max=175616, per=7.01%, avg=128635.70, stdev=28400.35, samples=20 00:26:24.655 iops : min= 402, max= 686, avg=502.45, stdev=110.89, samples=20 00:26:24.655 lat (msec) : 20=0.14%, 50=0.43%, 100=29.98%, 250=69.45% 00:26:24.655 cpu : usr=0.24%, sys=1.77%, ctx=883, majf=0, minf=4097 00:26:24.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:24.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:24.655 issued rwts: total=5090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:24.655 00:26:24.655 Run status group 0 (all jobs): 00:26:24.655 READ: bw=1792MiB/s (1879MB/s), 125MiB/s-282MiB/s (131MB/s-296MB/s), io=17.7GiB (19.0GB), run=10037-10092msec 00:26:24.655 00:26:24.655 Disk stats (read/write): 00:26:24.655 nvme0n1: ios=22640/0, merge=0/0, ticks=1236114/0, in_queue=1236114, util=97.78% 00:26:24.655 nvme10n1: ios=10030/0, merge=0/0, ticks=1245530/0, in_queue=1245530, util=97.87% 00:26:24.655 nvme1n1: ios=12264/0, merge=0/0, ticks=1241709/0, in_queue=1241709, util=98.02% 00:26:24.656 nvme2n1: ios=10331/0, merge=0/0, ticks=1249009/0, in_queue=1249009, util=98.20% 00:26:24.656 nvme3n1: ios=11206/0, merge=0/0, ticks=1243026/0, in_queue=1243026, util=98.10% 00:26:24.656 nvme4n1: ios=11937/0, merge=0/0, ticks=1242340/0, in_queue=1242340, util=98.42% 00:26:24.656 nvme5n1: ios=10920/0, merge=0/0, ticks=1247203/0, in_queue=1247203, util=98.45% 00:26:24.656 nvme6n1: ios=12699/0, merge=0/0, ticks=1245291/0, in_queue=1245291, util=98.39% 00:26:24.656 nvme7n1: ios=20723/0, merge=0/0, ticks=1239842/0, in_queue=1239842, util=98.61% 00:26:24.656 nvme8n1: ios=11117/0, merge=0/0, ticks=1245521/0, in_queue=1245521, util=98.97% 00:26:24.656 nvme9n1: ios=10103/0, merge=0/0, ticks=1248456/0, in_queue=1248456, util=99.07% 00:26:24.656 11:13:51 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf 
-i 262144 -d 64 -t randwrite -r 10 00:26:24.656 [global] 00:26:24.656 thread=1 00:26:24.656 invalidate=1 00:26:24.656 rw=randwrite 00:26:24.656 time_based=1 00:26:24.656 runtime=10 00:26:24.656 ioengine=libaio 00:26:24.656 direct=1 00:26:24.656 bs=262144 00:26:24.656 iodepth=64 00:26:24.656 norandommap=1 00:26:24.656 numjobs=1 00:26:24.656 00:26:24.656 [job0] 00:26:24.656 filename=/dev/nvme0n1 00:26:24.656 [job1] 00:26:24.656 filename=/dev/nvme10n1 00:26:24.656 [job2] 00:26:24.656 filename=/dev/nvme1n1 00:26:24.656 [job3] 00:26:24.656 filename=/dev/nvme2n1 00:26:24.656 [job4] 00:26:24.656 filename=/dev/nvme3n1 00:26:24.656 [job5] 00:26:24.656 filename=/dev/nvme4n1 00:26:24.656 [job6] 00:26:24.656 filename=/dev/nvme5n1 00:26:24.656 [job7] 00:26:24.656 filename=/dev/nvme6n1 00:26:24.656 [job8] 00:26:24.656 filename=/dev/nvme7n1 00:26:24.656 [job9] 00:26:24.656 filename=/dev/nvme8n1 00:26:24.656 [job10] 00:26:24.656 filename=/dev/nvme9n1 00:26:24.656 Could not set queue depth (nvme0n1) 00:26:24.656 Could not set queue depth (nvme10n1) 00:26:24.656 Could not set queue depth (nvme1n1) 00:26:24.656 Could not set queue depth (nvme2n1) 00:26:24.656 Could not set queue depth (nvme3n1) 00:26:24.656 Could not set queue depth (nvme4n1) 00:26:24.656 Could not set queue depth (nvme5n1) 00:26:24.656 Could not set queue depth (nvme6n1) 00:26:24.656 Could not set queue depth (nvme7n1) 00:26:24.656 Could not set queue depth (nvme8n1) 00:26:24.656 Could not set queue depth (nvme9n1) 00:26:24.656 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:24.656 fio-3.35 00:26:24.656 Starting 11 threads 00:26:34.624 00:26:34.624 job0: (groupid=0, jobs=1): err= 0: pid=96776: Thu Apr 18 11:14:01 2024 00:26:34.624 write: IOPS=424, BW=106MiB/s (111MB/s)(1077MiB/10152msec); 0 zone resets 00:26:34.624 slat (usec): min=19, max=17846, avg=2307.63, stdev=3968.01 00:26:34.624 clat (msec): min=20, max=292, avg=148.42, stdev=15.90 00:26:34.624 lat (msec): min=20, max=292, avg=150.72, stdev=15.63 00:26:34.624 clat percentiles (msec): 00:26:34.624 | 1.00th=[ 93], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 142], 00:26:34.624 | 30.00th=[ 146], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 150], 00:26:34.624 | 
70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 161], 00:26:34.624 | 99.00th=[ 192], 99.50th=[ 247], 99.90th=[ 284], 99.95th=[ 284], 00:26:34.624 | 99.99th=[ 292] 00:26:34.624 bw ( KiB/s): min=94208, max=111393, per=6.50%, avg=108671.90, stdev=3692.75, samples=20 00:26:34.624 iops : min= 368, max= 435, avg=424.45, stdev=14.39, samples=20 00:26:34.624 lat (msec) : 50=0.58%, 100=0.49%, 250=98.51%, 500=0.42% 00:26:34.624 cpu : usr=1.14%, sys=1.15%, ctx=3413, majf=0, minf=1 00:26:34.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:34.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.624 issued rwts: total=0,4309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.624 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.624 job1: (groupid=0, jobs=1): err= 0: pid=96777: Thu Apr 18 11:14:01 2024 00:26:34.624 write: IOPS=819, BW=205MiB/s (215MB/s)(2063MiB/10070msec); 0 zone resets 00:26:34.624 slat (usec): min=20, max=14481, avg=1207.23, stdev=2025.69 00:26:34.624 clat (msec): min=6, max=142, avg=76.84, stdev= 4.98 00:26:34.624 lat (msec): min=6, max=142, avg=78.05, stdev= 4.65 00:26:34.624 clat percentiles (msec): 00:26:34.624 | 1.00th=[ 72], 5.00th=[ 73], 10.00th=[ 73], 20.00th=[ 74], 00:26:34.624 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 78], 60.00th=[ 79], 00:26:34.624 | 70.00th=[ 79], 80.00th=[ 79], 90.00th=[ 80], 95.00th=[ 80], 00:26:34.624 | 99.00th=[ 91], 99.50th=[ 106], 99.90th=[ 133], 99.95th=[ 138], 00:26:34.624 | 99.99th=[ 142] 00:26:34.624 bw ( KiB/s): min=193536, max=214610, per=12.54%, avg=209616.90, stdev=4109.28, samples=20 00:26:34.624 iops : min= 756, max= 838, avg=818.80, stdev=16.03, samples=20 00:26:34.624 lat (msec) : 10=0.04%, 20=0.04%, 50=0.21%, 100=99.04%, 250=0.68% 00:26:34.624 cpu : usr=1.77%, sys=2.17%, ctx=12678, majf=0, minf=1 00:26:34.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:34.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.624 issued rwts: total=0,8252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.624 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.624 job2: (groupid=0, jobs=1): err= 0: pid=96789: Thu Apr 18 11:14:01 2024 00:26:34.624 write: IOPS=420, BW=105MiB/s (110MB/s)(1061MiB/10096msec); 0 zone resets 00:26:34.624 slat (usec): min=16, max=30619, avg=2344.60, stdev=4063.78 00:26:34.624 clat (msec): min=12, max=207, avg=149.81, stdev=15.46 00:26:34.624 lat (msec): min=12, max=207, avg=152.16, stdev=15.20 00:26:34.624 clat percentiles (msec): 00:26:34.624 | 1.00th=[ 64], 5.00th=[ 138], 10.00th=[ 144], 20.00th=[ 144], 00:26:34.624 | 30.00th=[ 146], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 00:26:34.624 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 159], 00:26:34.624 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 201], 99.95th=[ 201], 00:26:34.624 | 99.99th=[ 207] 00:26:34.624 bw ( KiB/s): min=98304, max=114970, per=6.40%, avg=107032.55, stdev=3490.12, samples=20 00:26:34.624 iops : min= 384, max= 449, avg=418.05, stdev=13.65, samples=20 00:26:34.624 lat (msec) : 20=0.12%, 50=0.61%, 100=0.90%, 250=98.37% 00:26:34.624 cpu : usr=1.05%, sys=1.20%, ctx=2078, majf=0, minf=1 00:26:34.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:34.624 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.624 issued rwts: total=0,4244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.624 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.624 job3: (groupid=0, jobs=1): err= 0: pid=96790: Thu Apr 18 11:14:01 2024 00:26:34.624 write: IOPS=425, BW=106MiB/s (112MB/s)(1071MiB/10065msec); 0 zone resets 00:26:34.625 slat (usec): min=17, max=69578, avg=2298.33, stdev=4146.38 00:26:34.625 clat (msec): min=38, max=200, avg=147.98, stdev=20.44 00:26:34.625 lat (msec): min=39, max=200, avg=150.28, stdev=20.49 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 57], 5.00th=[ 100], 10.00th=[ 144], 20.00th=[ 144], 00:26:34.625 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 00:26:34.625 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 161], 00:26:34.625 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 201], 99.95th=[ 201], 00:26:34.625 | 99.99th=[ 201] 00:26:34.625 bw ( KiB/s): min=88064, max=163840, per=6.47%, avg=108062.25, stdev=13872.69, samples=20 00:26:34.625 iops : min= 344, max= 640, avg=422.10, stdev=54.20, samples=20 00:26:34.625 lat (msec) : 50=0.44%, 100=4.64%, 250=94.91% 00:26:34.625 cpu : usr=1.05%, sys=1.34%, ctx=6739, majf=0, minf=1 00:26:34.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.625 issued rwts: total=0,4285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.625 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.625 job4: (groupid=0, jobs=1): err= 0: pid=96791: Thu Apr 18 11:14:01 2024 00:26:34.625 write: IOPS=422, BW=106MiB/s (111MB/s)(1072MiB/10151msec); 0 zone resets 00:26:34.625 slat (usec): min=24, max=34334, avg=2328.52, stdev=4016.88 00:26:34.625 clat (msec): min=19, max=293, avg=149.17, stdev=14.54 00:26:34.625 lat (msec): min=19, max=293, avg=151.50, stdev=14.15 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 144], 00:26:34.625 | 30.00th=[ 148], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 150], 00:26:34.625 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 165], 00:26:34.625 | 99.00th=[ 194], 99.50th=[ 249], 99.90th=[ 284], 99.95th=[ 284], 00:26:34.625 | 99.99th=[ 296] 00:26:34.625 bw ( KiB/s): min=94720, max=112640, per=6.47%, avg=108096.70, stdev=3962.14, samples=20 00:26:34.625 iops : min= 370, max= 440, avg=422.20, stdev=15.55, samples=20 00:26:34.625 lat (msec) : 20=0.09%, 50=0.21%, 100=0.35%, 250=98.93%, 500=0.42% 00:26:34.625 cpu : usr=0.95%, sys=1.23%, ctx=4287, majf=0, minf=1 00:26:34.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.625 issued rwts: total=0,4286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.625 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.625 job5: (groupid=0, jobs=1): err= 0: pid=96792: Thu Apr 18 11:14:01 2024 00:26:34.625 write: IOPS=424, BW=106MiB/s (111MB/s)(1077MiB/10155msec); 0 zone resets 00:26:34.625 slat (usec): min=21, max=15793, avg=2319.17, stdev=3966.21 00:26:34.625 clat (msec): min=5, max=293, avg=148.47, stdev=16.15 00:26:34.625 lat (msec): 
min=5, max=293, avg=150.79, stdev=15.86 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 88], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 142], 00:26:34.625 | 30.00th=[ 146], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 150], 00:26:34.625 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 161], 00:26:34.625 | 99.00th=[ 192], 99.50th=[ 249], 99.90th=[ 284], 99.95th=[ 284], 00:26:34.625 | 99.99th=[ 296] 00:26:34.625 bw ( KiB/s): min=94208, max=110882, per=6.50%, avg=108613.35, stdev=3569.63, samples=20 00:26:34.625 iops : min= 368, max= 433, avg=424.25, stdev=13.94, samples=20 00:26:34.625 lat (msec) : 10=0.07%, 20=0.05%, 50=0.44%, 100=0.46%, 250=98.56% 00:26:34.625 lat (msec) : 500=0.42% 00:26:34.625 cpu : usr=0.82%, sys=1.15%, ctx=4923, majf=0, minf=1 00:26:34.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.625 issued rwts: total=0,4307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.625 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.625 job6: (groupid=0, jobs=1): err= 0: pid=96793: Thu Apr 18 11:14:01 2024 00:26:34.625 write: IOPS=419, BW=105MiB/s (110MB/s)(1059MiB/10097msec); 0 zone resets 00:26:34.625 slat (usec): min=15, max=31459, avg=2329.68, stdev=4070.02 00:26:34.625 clat (msec): min=11, max=210, avg=150.15, stdev=17.34 00:26:34.625 lat (msec): min=11, max=210, avg=152.48, stdev=17.15 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 48], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 00:26:34.625 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 00:26:34.625 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 159], 00:26:34.625 | 99.00th=[ 197], 99.50th=[ 199], 99.90th=[ 203], 99.95th=[ 203], 00:26:34.625 | 99.99th=[ 211] 00:26:34.625 bw ( KiB/s): min=98304, max=116502, per=6.39%, avg=106795.15, stdev=3662.93, samples=20 00:26:34.625 iops : min= 384, max= 455, avg=417.15, stdev=14.29, samples=20 00:26:34.625 lat (msec) : 20=0.21%, 50=0.83%, 100=0.85%, 250=98.11% 00:26:34.625 cpu : usr=1.11%, sys=1.19%, ctx=4029, majf=0, minf=1 00:26:34.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.625 issued rwts: total=0,4236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.625 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.625 job7: (groupid=0, jobs=1): err= 0: pid=96794: Thu Apr 18 11:14:01 2024 00:26:34.625 write: IOPS=827, BW=207MiB/s (217MB/s)(2083MiB/10068msec); 0 zone resets 00:26:34.625 slat (usec): min=18, max=7797, avg=1190.62, stdev=2007.95 00:26:34.625 clat (msec): min=5, max=144, avg=76.11, stdev= 7.36 00:26:34.625 lat (msec): min=5, max=144, avg=77.30, stdev= 7.22 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 37], 5.00th=[ 73], 10.00th=[ 73], 20.00th=[ 74], 00:26:34.625 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 78], 60.00th=[ 79], 00:26:34.625 | 70.00th=[ 79], 80.00th=[ 79], 90.00th=[ 80], 95.00th=[ 80], 00:26:34.625 | 99.00th=[ 82], 99.50th=[ 96], 99.90th=[ 136], 99.95th=[ 140], 00:26:34.625 | 99.99th=[ 144] 00:26:34.625 bw ( KiB/s): min=205312, max=237568, per=12.67%, avg=211690.70, stdev=6327.09, samples=20 00:26:34.625 iops : min= 802, max= 928, avg=826.90, 
stdev=24.71, samples=20 00:26:34.625 lat (msec) : 10=0.12%, 20=0.31%, 50=1.06%, 100=98.06%, 250=0.46% 00:26:34.625 cpu : usr=1.85%, sys=2.09%, ctx=8127, majf=0, minf=1 00:26:34.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.625 issued rwts: total=0,8333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.625 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.625 job8: (groupid=0, jobs=1): err= 0: pid=96795: Thu Apr 18 11:14:01 2024 00:26:34.625 write: IOPS=1539, BW=385MiB/s (403MB/s)(3875MiB/10071msec); 0 zone resets 00:26:34.625 slat (usec): min=17, max=14886, avg=629.23, stdev=1051.23 00:26:34.625 clat (usec): min=956, max=218546, avg=40930.79, stdev=9892.79 00:26:34.625 lat (usec): min=995, max=219899, avg=41560.02, stdev=9907.70 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:26:34.625 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:26:34.625 | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 43], 95.00th=[ 43], 00:26:34.625 | 99.00th=[ 74], 99.50th=[ 112], 99.90th=[ 184], 99.95th=[ 203], 00:26:34.625 | 99.99th=[ 218] 00:26:34.625 bw ( KiB/s): min=324471, max=406829, per=23.65%, avg=395261.30, stdev=17196.16, samples=20 00:26:34.625 iops : min= 1267, max= 1589, avg=1543.95, stdev=67.26, samples=20 00:26:34.625 lat (usec) : 1000=0.01% 00:26:34.625 lat (msec) : 2=0.28%, 4=0.30%, 10=0.23%, 20=0.39%, 50=96.92% 00:26:34.625 lat (msec) : 100=1.28%, 250=0.59% 00:26:34.625 cpu : usr=2.62%, sys=3.52%, ctx=19946, majf=0, minf=1 00:26:34.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.625 issued rwts: total=0,15501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.625 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.625 job9: (groupid=0, jobs=1): err= 0: pid=96796: Thu Apr 18 11:14:01 2024 00:26:34.625 write: IOPS=423, BW=106MiB/s (111MB/s)(1074MiB/10147msec); 0 zone resets 00:26:34.625 slat (usec): min=16, max=31041, avg=2325.37, stdev=4001.88 00:26:34.625 clat (msec): min=33, max=288, avg=148.82, stdev=14.16 00:26:34.625 lat (msec): min=33, max=288, avg=151.14, stdev=13.77 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 124], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 142], 00:26:34.625 | 30.00th=[ 146], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 150], 00:26:34.625 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 161], 00:26:34.625 | 99.00th=[ 188], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 279], 00:26:34.625 | 99.99th=[ 288] 00:26:34.625 bw ( KiB/s): min=94208, max=112415, per=6.48%, avg=108316.85, stdev=3738.65, samples=20 00:26:34.625 iops : min= 368, max= 439, avg=423.10, stdev=14.59, samples=20 00:26:34.625 lat (msec) : 50=0.26%, 100=0.49%, 250=98.84%, 500=0.42% 00:26:34.625 cpu : usr=0.93%, sys=0.98%, ctx=4864, majf=0, minf=1 00:26:34.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:34.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.625 issued rwts: total=0,4295,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:34.625 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.625 job10: (groupid=0, jobs=1): err= 0: pid=96797: Thu Apr 18 11:14:01 2024 00:26:34.625 write: IOPS=420, BW=105MiB/s (110MB/s)(1061MiB/10104msec); 0 zone resets 00:26:34.625 slat (usec): min=24, max=48012, avg=2351.06, stdev=4104.62 00:26:34.625 clat (msec): min=5, max=217, avg=149.79, stdev=15.87 00:26:34.625 lat (msec): min=5, max=217, avg=152.14, stdev=15.61 00:26:34.625 clat percentiles (msec): 00:26:34.625 | 1.00th=[ 72], 5.00th=[ 138], 10.00th=[ 144], 20.00th=[ 144], 00:26:34.625 | 30.00th=[ 146], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 00:26:34.625 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 161], 00:26:34.625 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 207], 99.95th=[ 209], 00:26:34.625 | 99.99th=[ 218] 00:26:34.625 bw ( KiB/s): min=100352, max=114970, per=6.41%, avg=107049.45, stdev=2848.33, samples=20 00:26:34.626 iops : min= 392, max= 449, avg=417.90, stdev=11.14, samples=20 00:26:34.626 lat (msec) : 10=0.24%, 20=0.38%, 100=0.85%, 250=98.54% 00:26:34.626 cpu : usr=1.14%, sys=1.26%, ctx=4946, majf=0, minf=1 00:26:34.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:34.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.626 issued rwts: total=0,4244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.626 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.626 00:26:34.626 Run status group 0 (all jobs): 00:26:34.626 WRITE: bw=1632MiB/s (1711MB/s), 105MiB/s-385MiB/s (110MB/s-403MB/s), io=16.2GiB (17.4GB), run=10065-10155msec 00:26:34.626 00:26:34.626 Disk stats (read/write): 00:26:34.626 nvme0n1: ios=50/8481, merge=0/0, ticks=112/1213134, in_queue=1213246, util=97.91% 00:26:34.626 nvme10n1: ios=49/16366, merge=0/0, ticks=49/1216990, in_queue=1217039, util=97.91% 00:26:34.626 nvme1n1: ios=28/8349, merge=0/0, ticks=31/1214357, in_queue=1214388, util=97.86% 00:26:34.626 nvme2n1: ios=0/8403, merge=0/0, ticks=0/1216639, in_queue=1216639, util=97.79% 00:26:34.626 nvme3n1: ios=0/8438, merge=0/0, ticks=0/1213275, in_queue=1213275, util=97.97% 00:26:34.626 nvme4n1: ios=0/8476, merge=0/0, ticks=0/1212891, in_queue=1212891, util=98.22% 00:26:34.626 nvme5n1: ios=0/8339, merge=0/0, ticks=0/1215570, in_queue=1215570, util=98.30% 00:26:34.626 nvme6n1: ios=0/16534, merge=0/0, ticks=0/1216390, in_queue=1216390, util=98.44% 00:26:34.626 nvme7n1: ios=0/30855, merge=0/0, ticks=0/1216750, in_queue=1216750, util=98.64% 00:26:34.626 nvme8n1: ios=0/8441, merge=0/0, ticks=0/1211775, in_queue=1211775, util=98.65% 00:26:34.626 nvme9n1: ios=0/8366, merge=0/0, ticks=0/1214340, in_queue=1214340, util=99.02% 00:26:34.626 11:14:01 -- target/multiconnection.sh@36 -- # sync 00:26:34.626 11:14:01 -- target/multiconnection.sh@37 -- # seq 1 11 00:26:34.626 11:14:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:26:34.626 11:14:02 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- 
common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:26:34.626 11:14:02 -- 
common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.626 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:34.626 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.626 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.626 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.626 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.626 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:34.626 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:34.626 11:14:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:34.626 11:14:02 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.626 11:14:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:26:34.626 11:14:02 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.627 11:14:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:34.627 11:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.627 11:14:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.627 11:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.627 11:14:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.627 11:14:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:34.627 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:34.627 11:14:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:34.627 11:14:03 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.627 11:14:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.627 11:14:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:26:34.627 11:14:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:26:34.627 11:14:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.627 11:14:03 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.627 11:14:03 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:34.627 11:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.627 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:26:34.627 11:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.627 11:14:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.627 11:14:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:34.627 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:34.627 11:14:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:34.627 11:14:03 -- common/autotest_common.sh@1205 -- # local i=0 00:26:34.627 11:14:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:34.627 11:14:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:26:34.627 11:14:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:34.627 11:14:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:26:34.627 11:14:03 -- common/autotest_common.sh@1217 -- # return 0 00:26:34.627 11:14:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:34.627 11:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.627 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:26:34.627 11:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.627 11:14:03 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:34.627 11:14:03 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:34.627 11:14:03 -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:34.627 11:14:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:34.627 11:14:03 -- nvmf/common.sh@117 -- # sync 00:26:34.627 11:14:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.627 11:14:03 -- nvmf/common.sh@120 -- # set +e 00:26:34.627 11:14:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.627 11:14:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.627 rmmod nvme_tcp 00:26:34.885 rmmod nvme_fabrics 00:26:34.885 rmmod nvme_keyring 00:26:34.885 11:14:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.885 11:14:03 -- nvmf/common.sh@124 -- # set -e 00:26:34.885 11:14:03 -- nvmf/common.sh@125 -- # return 0 00:26:34.885 11:14:03 -- nvmf/common.sh@478 -- # '[' -n 96094 ']' 00:26:34.885 11:14:03 -- nvmf/common.sh@479 -- # killprocess 96094 00:26:34.885 11:14:03 -- common/autotest_common.sh@936 -- # '[' -z 96094 ']' 00:26:34.885 11:14:03 -- common/autotest_common.sh@940 -- # kill -0 96094 00:26:34.885 11:14:03 -- common/autotest_common.sh@941 -- # uname 00:26:34.885 11:14:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:34.885 11:14:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96094 00:26:34.885 11:14:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:34.885 killing process with pid 96094 00:26:34.885 11:14:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:34.885 11:14:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96094' 00:26:34.885 11:14:03 -- common/autotest_common.sh@955 -- # kill 96094 00:26:34.885 11:14:03 -- common/autotest_common.sh@960 -- # wait 96094 00:26:35.193 11:14:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:35.193 11:14:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:35.193 11:14:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 
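The loop traced above walks all eleven connections, detaching each initiator and then deleting the matching target-side subsystem over RPC, before nvmftestfini unloads the nvme-tcp modules and kills the nvmf_tgt process. As a hedged sketch only (assuming rpc_cmd forwards to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, as the autotest helpers normally do), the teardown amounts to:

    # one pass per subsystem created by the multiconnection test
    for i in $(seq 1 11); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"                      # drop the initiator-side connection
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"    # remove the subsystem from the target
    done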
00:26:35.193 11:14:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.193 11:14:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.193 11:14:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.193 11:14:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.193 11:14:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.452 11:14:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:35.452 ************************************ 00:26:35.452 END TEST nvmf_multiconnection 00:26:35.452 ************************************ 00:26:35.452 00:26:35.452 real 0m49.628s 00:26:35.452 user 2m47.487s 00:26:35.452 sys 0m24.597s 00:26:35.452 11:14:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:35.452 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:26:35.452 11:14:03 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:35.452 11:14:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:35.452 11:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:35.452 11:14:03 -- common/autotest_common.sh@10 -- # set +x 00:26:35.452 ************************************ 00:26:35.452 START TEST nvmf_initiator_timeout 00:26:35.452 ************************************ 00:26:35.452 11:14:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:35.452 * Looking for test storage... 00:26:35.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:35.452 11:14:04 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:35.452 11:14:04 -- nvmf/common.sh@7 -- # uname -s 00:26:35.452 11:14:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.452 11:14:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.452 11:14:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.452 11:14:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.452 11:14:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.452 11:14:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.452 11:14:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.452 11:14:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.452 11:14:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.452 11:14:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.452 11:14:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:26:35.452 11:14:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:26:35.452 11:14:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.452 11:14:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.452 11:14:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:35.452 11:14:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.452 11:14:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:35.452 11:14:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.452 11:14:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.452 11:14:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.452 11:14:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.452 11:14:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.452 11:14:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.452 11:14:04 -- paths/export.sh@5 -- # export PATH 00:26:35.452 11:14:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.452 11:14:04 -- nvmf/common.sh@47 -- # : 0 00:26:35.452 11:14:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:35.452 11:14:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:35.452 11:14:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.452 11:14:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.452 11:14:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.452 11:14:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:35.452 11:14:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:35.452 11:14:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:35.452 11:14:04 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:35.452 11:14:04 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:35.452 11:14:04 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:35.452 11:14:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:35.452 11:14:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.452 11:14:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:35.452 11:14:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:35.452 11:14:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:35.452 11:14:04 -- nvmf/common.sh@617 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.452 11:14:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.452 11:14:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.452 11:14:04 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:35.452 11:14:04 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:35.452 11:14:04 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:35.452 11:14:04 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:35.452 11:14:04 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:35.452 11:14:04 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:35.452 11:14:04 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.452 11:14:04 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.452 11:14:04 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:35.452 11:14:04 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:35.452 11:14:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:35.452 11:14:04 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:35.452 11:14:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:35.452 11:14:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.452 11:14:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:35.452 11:14:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:35.453 11:14:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:35.453 11:14:04 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:35.453 11:14:04 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:35.453 11:14:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:35.711 Cannot find device "nvmf_tgt_br" 00:26:35.711 11:14:04 -- nvmf/common.sh@155 -- # true 00:26:35.711 11:14:04 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:35.711 Cannot find device "nvmf_tgt_br2" 00:26:35.711 11:14:04 -- nvmf/common.sh@156 -- # true 00:26:35.711 11:14:04 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:35.711 11:14:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:35.711 Cannot find device "nvmf_tgt_br" 00:26:35.711 11:14:04 -- nvmf/common.sh@158 -- # true 00:26:35.711 11:14:04 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:35.711 Cannot find device "nvmf_tgt_br2" 00:26:35.711 11:14:04 -- nvmf/common.sh@159 -- # true 00:26:35.711 11:14:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:35.711 11:14:04 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:35.711 11:14:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:35.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:35.711 11:14:04 -- nvmf/common.sh@162 -- # true 00:26:35.711 11:14:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:35.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:35.711 11:14:04 -- nvmf/common.sh@163 -- # true 00:26:35.711 11:14:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:35.711 11:14:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:35.711 11:14:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:35.711 11:14:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer 
name nvmf_tgt_br2 00:26:35.711 11:14:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:35.711 11:14:04 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:35.711 11:14:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:35.711 11:14:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:35.711 11:14:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:35.711 11:14:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:35.711 11:14:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:35.711 11:14:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:35.711 11:14:04 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:35.711 11:14:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:35.711 11:14:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:35.711 11:14:04 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:35.711 11:14:04 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:35.711 11:14:04 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:35.711 11:14:04 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:35.711 11:14:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:35.969 11:14:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:35.969 11:14:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:35.969 11:14:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:35.969 11:14:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:35.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:26:35.969 00:26:35.969 --- 10.0.0.2 ping statistics --- 00:26:35.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.969 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:35.969 11:14:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:35.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:35.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:26:35.969 00:26:35.969 --- 10.0.0.3 ping statistics --- 00:26:35.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.969 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:35.969 11:14:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:35.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:35.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:26:35.969 00:26:35.969 --- 10.0.0.1 ping statistics --- 00:26:35.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.969 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:26:35.969 11:14:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.969 11:14:04 -- nvmf/common.sh@422 -- # return 0 00:26:35.969 11:14:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:35.969 11:14:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.969 11:14:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:35.969 11:14:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:35.969 11:14:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.969 11:14:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:35.969 11:14:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:35.969 11:14:04 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:35.969 11:14:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:35.969 11:14:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:35.970 11:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:35.970 11:14:04 -- nvmf/common.sh@470 -- # nvmfpid=97168 00:26:35.970 11:14:04 -- nvmf/common.sh@471 -- # waitforlisten 97168 00:26:35.970 11:14:04 -- common/autotest_common.sh@817 -- # '[' -z 97168 ']' 00:26:35.970 11:14:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.970 11:14:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:35.970 11:14:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:35.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.970 11:14:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.970 11:14:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:35.970 11:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:35.970 [2024-04-18 11:14:04.478758] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:35.970 [2024-04-18 11:14:04.478868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.228 [2024-04-18 11:14:04.623429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.228 [2024-04-18 11:14:04.724189] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.228 [2024-04-18 11:14:04.724531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.228 [2024-04-18 11:14:04.724860] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.228 [2024-04-18 11:14:04.725149] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.228 [2024-04-18 11:14:04.725363] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
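The nvmf_veth_init trace above builds the virtual test network the target runs in: a nvmf_tgt_ns_spdk namespace, veth pairs joined on the nvmf_br bridge, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, with TCP port 4420 opened in iptables and verified by the pings. Condensed into a sketch (interface names and addresses copied from the trace; the second target interface, link-up steps and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if      # target address inside the namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT           # allow NVMe/TCP traffic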
00:26:36.228 [2024-04-18 11:14:04.725676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.228 [2024-04-18 11:14:04.725790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.228 [2024-04-18 11:14:04.725839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.228 [2024-04-18 11:14:04.725844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.161 11:14:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:37.161 11:14:05 -- common/autotest_common.sh@850 -- # return 0 00:26:37.161 11:14:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:37.161 11:14:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:37.161 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.161 11:14:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:37.161 11:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.161 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.161 Malloc0 00:26:37.161 11:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:37.161 11:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.161 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.161 Delay0 00:26:37.161 11:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:37.161 11:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.161 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.161 [2024-04-18 11:14:05.543839] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.161 11:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:37.161 11:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.161 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.161 11:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:37.161 11:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.161 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.161 11:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.161 11:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:37.161 11:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.161 [2024-04-18 11:14:05.572104] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.161 11:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:37.161 11:14:05 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:37.161 11:14:05 -- common/autotest_common.sh@1184 -- # local i=0 00:26:37.161 11:14:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:26:37.161 11:14:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:26:37.161 11:14:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:26:39.692 11:14:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:26:39.692 11:14:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:26:39.692 11:14:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:26:39.692 11:14:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:26:39.692 11:14:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.692 11:14:07 -- common/autotest_common.sh@1194 -- # return 0 00:26:39.692 11:14:07 -- target/initiator_timeout.sh@35 -- # fio_pid=97256 00:26:39.692 11:14:07 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:39.692 11:14:07 -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:39.692 [global] 00:26:39.692 thread=1 00:26:39.692 invalidate=1 00:26:39.692 rw=write 00:26:39.692 time_based=1 00:26:39.692 runtime=60 00:26:39.692 ioengine=libaio 00:26:39.692 direct=1 00:26:39.692 bs=4096 00:26:39.692 iodepth=1 00:26:39.692 norandommap=0 00:26:39.692 numjobs=1 00:26:39.692 00:26:39.692 verify_dump=1 00:26:39.692 verify_backlog=512 00:26:39.692 verify_state_save=0 00:26:39.692 do_verify=1 00:26:39.692 verify=crc32c-intel 00:26:39.692 [job0] 00:26:39.692 filename=/dev/nvme0n1 00:26:39.692 Could not set queue depth (nvme0n1) 00:26:39.692 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:39.692 fio-3.35 00:26:39.692 Starting 1 thread 00:26:42.226 11:14:10 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:42.226 11:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.226 11:14:10 -- common/autotest_common.sh@10 -- # set +x 00:26:42.226 true 00:26:42.226 11:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.226 11:14:10 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:42.226 11:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.226 11:14:10 -- common/autotest_common.sh@10 -- # set +x 00:26:42.226 true 00:26:42.226 11:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.226 11:14:10 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:42.226 11:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.226 11:14:10 -- common/autotest_common.sh@10 -- # set +x 00:26:42.226 true 00:26:42.226 11:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.226 11:14:10 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:42.226 11:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.226 11:14:10 -- common/autotest_common.sh@10 -- # set +x 00:26:42.226 true 00:26:42.226 11:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.226 11:14:10 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:26:45.531 11:14:13 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:45.531 11:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.531 11:14:13 -- common/autotest_common.sh@10 -- # set +x 00:26:45.531 true 00:26:45.531 11:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.531 11:14:13 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:45.531 11:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.531 11:14:13 -- common/autotest_common.sh@10 -- # set +x 00:26:45.531 true 00:26:45.531 11:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.531 11:14:13 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:45.531 11:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.531 11:14:13 -- common/autotest_common.sh@10 -- # set +x 00:26:45.531 true 00:26:45.531 11:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.531 11:14:13 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:45.531 11:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.531 11:14:13 -- common/autotest_common.sh@10 -- # set +x 00:26:45.531 true 00:26:45.531 11:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.531 11:14:13 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:45.531 11:14:13 -- target/initiator_timeout.sh@54 -- # wait 97256 00:27:41.740 00:27:41.740 job0: (groupid=0, jobs=1): err= 0: pid=97277: Thu Apr 18 11:15:08 2024 00:27:41.740 read: IOPS=836, BW=3345KiB/s (3425kB/s)(196MiB/60000msec) 00:27:41.740 slat (usec): min=12, max=11754, avg=16.01, stdev=68.27 00:27:41.740 clat (usec): min=165, max=40533k, avg=1001.91, stdev=180951.23 00:27:41.740 lat (usec): min=178, max=40533k, avg=1017.92, stdev=180951.23 00:27:41.740 clat percentiles (usec): 00:27:41.740 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:27:41.740 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:27:41.740 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:27:41.740 | 99.00th=[ 241], 99.50th=[ 269], 99.90th=[ 644], 99.95th=[ 668], 00:27:41.740 | 99.99th=[ 848] 00:27:41.740 write: IOPS=840, BW=3361KiB/s (3442kB/s)(197MiB/60000msec); 0 zone resets 00:27:41.740 slat (usec): min=19, max=617, avg=22.96, stdev= 5.35 00:27:41.740 clat (usec): min=100, max=1504, avg=150.47, stdev=14.87 00:27:41.740 lat (usec): min=148, max=1540, avg=173.43, stdev=16.32 00:27:41.740 clat percentiles (usec): 00:27:41.740 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:27:41.740 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:27:41.740 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 00:27:41.740 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 229], 99.95th=[ 293], 00:27:41.740 | 99.99th=[ 701] 00:27:41.740 bw ( KiB/s): min= 4063, max=12288, per=100.00%, avg=10096.67, stdev=1737.96, samples=39 00:27:41.740 iops : min= 1015, max= 3072, avg=2524.18, stdev=434.40, samples=39 00:27:41.740 lat (usec) : 250=99.62%, 500=0.22%, 750=0.14%, 1000=0.01% 00:27:41.740 lat (msec) : 2=0.01%, >=2000=0.01% 00:27:41.740 cpu : usr=0.65%, sys=2.41%, ctx=100598, majf=0, minf=2 00:27:41.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:41.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.740 issued rwts: total=50176,50413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:41.740 00:27:41.740 Run status group 0 (all jobs): 00:27:41.740 READ: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=196MiB (206MB), run=60000-60000msec 00:27:41.740 WRITE: bw=3361KiB/s (3442kB/s), 3361KiB/s-3361KiB/s (3442kB/s-3442kB/s), io=197MiB (206MB), run=60000-60000msec 00:27:41.740 00:27:41.740 Disk stats (read/write): 00:27:41.740 nvme0n1: ios=50247/50176, merge=0/0, ticks=10164/8132, in_queue=18296, util=99.72% 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:41.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:41.740 11:15:08 -- common/autotest_common.sh@1205 -- # local i=0 00:27:41.740 11:15:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.740 11:15:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:27:41.740 11:15:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.740 11:15:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:27:41.740 nvmf hotplug test: fio successful as expected 00:27:41.740 11:15:08 -- common/autotest_common.sh@1217 -- # return 0 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.740 11:15:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.740 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:27:41.740 11:15:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:41.740 11:15:08 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:41.740 11:15:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:41.740 11:15:08 -- nvmf/common.sh@117 -- # sync 00:27:41.740 11:15:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.740 11:15:08 -- nvmf/common.sh@120 -- # set +e 00:27:41.740 11:15:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.740 11:15:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.740 rmmod nvme_tcp 00:27:41.740 rmmod nvme_fabrics 00:27:41.740 rmmod nvme_keyring 00:27:41.740 11:15:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.740 11:15:08 -- nvmf/common.sh@124 -- # set -e 00:27:41.740 11:15:08 -- nvmf/common.sh@125 -- # return 0 00:27:41.740 11:15:08 -- nvmf/common.sh@478 -- # '[' -n 97168 ']' 00:27:41.740 11:15:08 -- nvmf/common.sh@479 -- # killprocess 97168 00:27:41.740 11:15:08 -- common/autotest_common.sh@936 -- # '[' -z 97168 ']' 00:27:41.740 11:15:08 -- common/autotest_common.sh@940 -- # kill -0 97168 00:27:41.740 11:15:08 -- common/autotest_common.sh@941 -- # uname 00:27:41.740 11:15:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:41.740 11:15:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97168 00:27:41.740 killing process with pid 97168 
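The initiator_timeout run above layers a delay bdev (Delay0) over the malloc backing device, exports it over NVMe/TCP, then raises the delay bdev's configured latencies by six orders of magnitude for a short window while the 60-second fio write job is in flight, and lowers them again so the job can still finish ("fio successful as expected"). A hedged recap of the RPC sequence, with values taken from the trace and rpc_cmd assumed to map to scripts/rpc.py:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB backing device, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000          # stall writes mid-run
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30                # then release them before fio ends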
00:27:41.740 11:15:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:41.740 11:15:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:41.740 11:15:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97168' 00:27:41.740 11:15:08 -- common/autotest_common.sh@955 -- # kill 97168 00:27:41.740 11:15:08 -- common/autotest_common.sh@960 -- # wait 97168 00:27:41.740 11:15:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:41.740 11:15:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:41.740 11:15:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:41.740 11:15:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.740 11:15:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.740 11:15:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.740 11:15:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.740 11:15:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.740 11:15:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:41.740 00:27:41.740 real 1m4.566s 00:27:41.740 user 4m5.231s 00:27:41.740 sys 0m9.880s 00:27:41.740 11:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:41.740 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:27:41.740 ************************************ 00:27:41.740 END TEST nvmf_initiator_timeout 00:27:41.740 ************************************ 00:27:41.740 11:15:08 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:27:41.740 11:15:08 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:27:41.740 11:15:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:41.740 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:27:41.740 11:15:08 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:27:41.740 11:15:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:41.740 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:27:41.740 11:15:08 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:27:41.740 11:15:08 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:41.740 11:15:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:41.740 11:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:41.740 11:15:08 -- common/autotest_common.sh@10 -- # set +x 00:27:41.740 ************************************ 00:27:41.740 START TEST nvmf_multicontroller 00:27:41.740 ************************************ 00:27:41.740 11:15:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:41.740 * Looking for test storage... 
00:27:41.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:41.740 11:15:08 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:41.740 11:15:08 -- nvmf/common.sh@7 -- # uname -s 00:27:41.740 11:15:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.740 11:15:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.740 11:15:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.740 11:15:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.740 11:15:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.740 11:15:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.740 11:15:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.740 11:15:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.740 11:15:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.740 11:15:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.740 11:15:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:41.740 11:15:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:41.740 11:15:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.740 11:15:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.740 11:15:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:41.740 11:15:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.740 11:15:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:41.740 11:15:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.740 11:15:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.740 11:15:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.740 11:15:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.740 11:15:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.740 11:15:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.740 11:15:08 -- paths/export.sh@5 -- # export PATH 00:27:41.740 11:15:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.740 11:15:08 -- nvmf/common.sh@47 -- # : 0 00:27:41.741 11:15:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.741 11:15:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.741 11:15:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.741 11:15:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.741 11:15:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.741 11:15:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.741 11:15:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.741 11:15:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.741 11:15:08 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:41.741 11:15:08 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:41.741 11:15:08 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:41.741 11:15:08 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:41.741 11:15:08 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:41.741 11:15:08 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:41.741 11:15:08 -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:41.741 11:15:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:41.741 11:15:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.741 11:15:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:41.741 11:15:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:41.741 11:15:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:41.741 11:15:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.741 11:15:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.741 11:15:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.741 11:15:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:41.741 11:15:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:41.741 11:15:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:41.741 11:15:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:41.741 11:15:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:41.741 11:15:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:41.741 11:15:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.741 11:15:08 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:27:41.741 11:15:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:41.741 11:15:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:41.741 11:15:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:41.741 11:15:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:41.741 11:15:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:41.741 11:15:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.741 11:15:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:41.741 11:15:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:41.741 11:15:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:41.741 11:15:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:41.741 11:15:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:41.741 11:15:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:41.741 Cannot find device "nvmf_tgt_br" 00:27:41.741 11:15:08 -- nvmf/common.sh@155 -- # true 00:27:41.741 11:15:08 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:41.741 Cannot find device "nvmf_tgt_br2" 00:27:41.741 11:15:08 -- nvmf/common.sh@156 -- # true 00:27:41.741 11:15:08 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:41.741 11:15:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:41.741 Cannot find device "nvmf_tgt_br" 00:27:41.741 11:15:08 -- nvmf/common.sh@158 -- # true 00:27:41.741 11:15:08 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:41.741 Cannot find device "nvmf_tgt_br2" 00:27:41.741 11:15:08 -- nvmf/common.sh@159 -- # true 00:27:41.741 11:15:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:41.741 11:15:08 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:41.741 11:15:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:41.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.741 11:15:08 -- nvmf/common.sh@162 -- # true 00:27:41.741 11:15:08 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:41.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.741 11:15:08 -- nvmf/common.sh@163 -- # true 00:27:41.741 11:15:08 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:41.741 11:15:08 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:41.741 11:15:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:41.741 11:15:08 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:41.741 11:15:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:41.741 11:15:08 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:41.741 11:15:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:41.741 11:15:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:41.741 11:15:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:41.741 11:15:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:41.741 11:15:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:41.741 11:15:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:27:41.741 11:15:09 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:41.741 11:15:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:41.741 11:15:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:41.741 11:15:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:41.741 11:15:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:41.741 11:15:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:41.741 11:15:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:41.741 11:15:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:41.741 11:15:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:41.741 11:15:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:41.741 11:15:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:41.741 11:15:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:41.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:27:41.741 00:27:41.741 --- 10.0.0.2 ping statistics --- 00:27:41.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.741 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:27:41.741 11:15:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:41.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:41.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:27:41.741 00:27:41.741 --- 10.0.0.3 ping statistics --- 00:27:41.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.741 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:27:41.741 11:15:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:41.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:27:41.741 00:27:41.741 --- 10.0.0.1 ping statistics --- 00:27:41.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.741 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:41.741 11:15:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.741 11:15:09 -- nvmf/common.sh@422 -- # return 0 00:27:41.741 11:15:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:41.741 11:15:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.741 11:15:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:41.741 11:15:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:41.741 11:15:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.741 11:15:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:41.741 11:15:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:41.741 11:15:09 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:41.741 11:15:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:41.741 11:15:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:41.741 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:27:41.741 11:15:09 -- nvmf/common.sh@470 -- # nvmfpid=98105 00:27:41.741 11:15:09 -- nvmf/common.sh@471 -- # waitforlisten 98105 00:27:41.741 11:15:09 -- common/autotest_common.sh@817 -- # '[' -z 98105 ']' 00:27:41.741 11:15:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:41.741 11:15:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.741 11:15:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:41.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.741 11:15:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.741 11:15:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:41.741 11:15:09 -- common/autotest_common.sh@10 -- # set +x 00:27:41.741 [2024-04-18 11:15:09.216359] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:41.741 [2024-04-18 11:15:09.216465] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.741 [2024-04-18 11:15:09.362212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:41.741 [2024-04-18 11:15:09.462931] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.741 [2024-04-18 11:15:09.463229] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.741 [2024-04-18 11:15:09.463394] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.741 [2024-04-18 11:15:09.463644] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.741 [2024-04-18 11:15:09.463769] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:41.741 [2024-04-18 11:15:09.463895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.741 [2024-04-18 11:15:09.464337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.741 [2024-04-18 11:15:09.464401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.741 11:15:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:41.741 11:15:10 -- common/autotest_common.sh@850 -- # return 0 00:27:41.741 11:15:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:41.741 11:15:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:41.741 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:41.741 11:15:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.741 11:15:10 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:41.741 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.741 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:41.741 [2024-04-18 11:15:10.332459] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.741 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.741 11:15:10 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:41.741 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.741 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:41.741 Malloc0 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 [2024-04-18 11:15:10.402827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 [2024-04-18 11:15:10.410745] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 Malloc1 00:27:42.000 11:15:10 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:42.000 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.000 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.000 11:15:10 -- host/multicontroller.sh@44 -- # bdevperf_pid=98157 00:27:42.000 11:15:10 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:42.000 11:15:10 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.000 11:15:10 -- host/multicontroller.sh@47 -- # waitforlisten 98157 /var/tmp/bdevperf.sock 00:27:42.000 11:15:10 -- common/autotest_common.sh@817 -- # '[' -z 98157 ']' 00:27:42.000 11:15:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.000 11:15:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:42.000 11:15:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:42.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:42.000 11:15:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:42.000 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.257 11:15:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:42.257 11:15:10 -- common/autotest_common.sh@850 -- # return 0 00:27:42.257 11:15:10 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:42.257 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.257 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.515 NVMe0n1 00:27:42.515 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.515 11:15:10 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:42.515 11:15:10 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:42.515 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.515 11:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.515 1 00:27:42.515 11:15:10 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:42.515 11:15:10 -- common/autotest_common.sh@638 -- # local es=0 00:27:42.515 11:15:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:42.515 11:15:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:42.515 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.515 2024/04/18 11:15:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:42.515 request: 00:27:42.515 { 00:27:42.515 "method": "bdev_nvme_attach_controller", 00:27:42.515 "params": { 00:27:42.515 "name": "NVMe0", 00:27:42.515 "trtype": "tcp", 00:27:42.515 "traddr": "10.0.0.2", 00:27:42.515 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:42.515 "hostaddr": "10.0.0.2", 00:27:42.515 "hostsvcid": "60000", 00:27:42.515 "adrfam": "ipv4", 00:27:42.515 "trsvcid": "4420", 00:27:42.515 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:27:42.515 } 00:27:42.515 } 00:27:42.515 Got JSON-RPC error response 00:27:42.515 GoRPCClient: error on JSON-RPC call 00:27:42.515 11:15:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:42.515 11:15:10 -- 
common/autotest_common.sh@641 -- # es=1 00:27:42.515 11:15:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:42.515 11:15:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:42.515 11:15:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:42.515 11:15:10 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:42.515 11:15:10 -- common/autotest_common.sh@638 -- # local es=0 00:27:42.515 11:15:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:42.515 11:15:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:42.515 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.515 2024/04/18 11:15:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:42.515 request: 00:27:42.515 { 00:27:42.515 "method": "bdev_nvme_attach_controller", 00:27:42.515 "params": { 00:27:42.515 "name": "NVMe0", 00:27:42.515 "trtype": "tcp", 00:27:42.515 "traddr": "10.0.0.2", 00:27:42.515 "hostaddr": "10.0.0.2", 00:27:42.515 "hostsvcid": "60000", 00:27:42.515 "adrfam": "ipv4", 00:27:42.515 "trsvcid": "4420", 00:27:42.515 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:27:42.515 } 00:27:42.515 } 00:27:42.515 Got JSON-RPC error response 00:27:42.515 GoRPCClient: error on JSON-RPC call 00:27:42.515 11:15:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:42.515 11:15:10 -- common/autotest_common.sh@641 -- # es=1 00:27:42.515 11:15:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:42.515 11:15:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:42.515 11:15:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:42.515 11:15:10 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@638 -- # local es=0 00:27:42.515 11:15:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:42.515 11:15:10 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.515 2024/04/18 11:15:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:27:42.515 request: 00:27:42.515 { 00:27:42.515 "method": "bdev_nvme_attach_controller", 00:27:42.515 "params": { 00:27:42.515 "name": "NVMe0", 00:27:42.515 "trtype": "tcp", 00:27:42.515 "traddr": "10.0.0.2", 00:27:42.515 "hostaddr": "10.0.0.2", 00:27:42.515 "hostsvcid": "60000", 00:27:42.515 "adrfam": "ipv4", 00:27:42.515 "trsvcid": "4420", 00:27:42.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.515 "multipath": "disable" 00:27:42.515 } 00:27:42.515 } 00:27:42.515 Got JSON-RPC error response 00:27:42.515 GoRPCClient: error on JSON-RPC call 00:27:42.515 11:15:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:42.515 11:15:10 -- common/autotest_common.sh@641 -- # es=1 00:27:42.515 11:15:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:42.515 11:15:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:42.515 11:15:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:42.515 11:15:10 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:42.515 11:15:10 -- common/autotest_common.sh@638 -- # local es=0 00:27:42.515 11:15:10 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:42.515 11:15:10 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:42.515 11:15:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:42.515 11:15:10 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:42.515 11:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.515 11:15:10 -- common/autotest_common.sh@10 -- # set +x 00:27:42.515 2024/04/18 11:15:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:42.515 request: 00:27:42.515 { 00:27:42.515 "method": "bdev_nvme_attach_controller", 00:27:42.515 "params": { 00:27:42.515 "name": "NVMe0", 
00:27:42.516 "trtype": "tcp", 00:27:42.516 "traddr": "10.0.0.2", 00:27:42.516 "hostaddr": "10.0.0.2", 00:27:42.516 "hostsvcid": "60000", 00:27:42.516 "adrfam": "ipv4", 00:27:42.516 "trsvcid": "4420", 00:27:42.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.516 "multipath": "failover" 00:27:42.516 } 00:27:42.516 } 00:27:42.516 Got JSON-RPC error response 00:27:42.516 GoRPCClient: error on JSON-RPC call 00:27:42.516 11:15:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:42.516 11:15:11 -- common/autotest_common.sh@641 -- # es=1 00:27:42.516 11:15:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:42.516 11:15:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:42.516 11:15:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:42.516 11:15:11 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:42.516 11:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.516 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:27:42.516 00:27:42.516 11:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.516 11:15:11 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:42.516 11:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.516 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:27:42.516 11:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.516 11:15:11 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:42.516 11:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.516 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:27:42.773 00:27:42.773 11:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.773 11:15:11 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:42.773 11:15:11 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:42.773 11:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.773 11:15:11 -- common/autotest_common.sh@10 -- # set +x 00:27:42.773 11:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.773 11:15:11 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:42.773 11:15:11 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:43.704 0 00:27:43.704 11:15:12 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:43.704 11:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.704 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:27:43.704 11:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.704 11:15:12 -- host/multicontroller.sh@100 -- # killprocess 98157 00:27:43.704 11:15:12 -- common/autotest_common.sh@936 -- # '[' -z 98157 ']' 00:27:43.704 11:15:12 -- common/autotest_common.sh@940 -- # kill -0 98157 00:27:43.704 11:15:12 -- common/autotest_common.sh@941 -- # uname 00:27:43.704 11:15:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:43.704 11:15:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98157 00:27:43.962 killing process with pid 98157 00:27:43.962 
11:15:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:43.962 11:15:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:43.962 11:15:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98157' 00:27:43.962 11:15:12 -- common/autotest_common.sh@955 -- # kill 98157 00:27:43.962 11:15:12 -- common/autotest_common.sh@960 -- # wait 98157 00:27:43.962 11:15:12 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:43.962 11:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.962 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:27:43.962 11:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.962 11:15:12 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:43.962 11:15:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.962 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:27:43.962 11:15:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.962 11:15:12 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:43.962 11:15:12 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:43.962 11:15:12 -- common/autotest_common.sh@1598 -- # read -r file 00:27:43.962 11:15:12 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:27:43.962 11:15:12 -- common/autotest_common.sh@1597 -- # sort -u 00:27:43.962 11:15:12 -- common/autotest_common.sh@1599 -- # cat 00:27:43.962 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:27:43.962 [2024-04-18 11:15:10.522997] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:43.962 [2024-04-18 11:15:10.523128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98157 ] 00:27:43.962 [2024-04-18 11:15:10.656449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.962 [2024-04-18 11:15:10.741986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.962 [2024-04-18 11:15:11.158844] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 6ad15432-550d-413b-a2dc-edaf04701a5c already exists 00:27:43.962 [2024-04-18 11:15:11.158926] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:6ad15432-550d-413b-a2dc-edaf04701a5c alias for bdev NVMe1n1 00:27:43.962 [2024-04-18 11:15:11.158949] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:43.962 Running I/O for 1 seconds... 
00:27:43.962 00:27:43.962 Latency(us) 00:27:43.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.962 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:43.962 NVMe0n1 : 1.01 18931.21 73.95 0.00 0.00 6745.47 2115.03 12094.37 00:27:43.962 =================================================================================================================== 00:27:43.962 Total : 18931.21 73.95 0.00 0.00 6745.47 2115.03 12094.37 00:27:43.962 Received shutdown signal, test time was about 1.000000 seconds 00:27:43.962 00:27:43.962 Latency(us) 00:27:43.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.962 =================================================================================================================== 00:27:43.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:43.962 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:27:43.962 11:15:12 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:43.962 11:15:12 -- common/autotest_common.sh@1598 -- # read -r file 00:27:43.962 11:15:12 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:43.962 11:15:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:43.962 11:15:12 -- nvmf/common.sh@117 -- # sync 00:27:44.220 11:15:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.220 11:15:12 -- nvmf/common.sh@120 -- # set +e 00:27:44.220 11:15:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.220 11:15:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.220 rmmod nvme_tcp 00:27:44.220 rmmod nvme_fabrics 00:27:44.220 rmmod nvme_keyring 00:27:44.220 11:15:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.220 11:15:12 -- nvmf/common.sh@124 -- # set -e 00:27:44.220 11:15:12 -- nvmf/common.sh@125 -- # return 0 00:27:44.220 11:15:12 -- nvmf/common.sh@478 -- # '[' -n 98105 ']' 00:27:44.220 11:15:12 -- nvmf/common.sh@479 -- # killprocess 98105 00:27:44.220 11:15:12 -- common/autotest_common.sh@936 -- # '[' -z 98105 ']' 00:27:44.220 11:15:12 -- common/autotest_common.sh@940 -- # kill -0 98105 00:27:44.220 11:15:12 -- common/autotest_common.sh@941 -- # uname 00:27:44.220 11:15:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:44.220 11:15:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98105 00:27:44.220 killing process with pid 98105 00:27:44.220 11:15:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:44.220 11:15:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:44.220 11:15:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98105' 00:27:44.220 11:15:12 -- common/autotest_common.sh@955 -- # kill 98105 00:27:44.220 11:15:12 -- common/autotest_common.sh@960 -- # wait 98105 00:27:44.478 11:15:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:44.478 11:15:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:44.478 11:15:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:44.478 11:15:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.478 11:15:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.478 11:15:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.478 11:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.478 11:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.478 11:15:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:44.478 
00:27:44.478 real 0m4.312s 00:27:44.478 user 0m12.878s 00:27:44.478 sys 0m1.016s 00:27:44.478 ************************************ 00:27:44.478 END TEST nvmf_multicontroller 00:27:44.478 ************************************ 00:27:44.478 11:15:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:44.478 11:15:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.478 11:15:13 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:44.478 11:15:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:44.478 11:15:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:44.478 11:15:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.478 ************************************ 00:27:44.478 START TEST nvmf_aer 00:27:44.478 ************************************ 00:27:44.478 11:15:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:44.735 * Looking for test storage... 00:27:44.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:44.735 11:15:13 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:44.735 11:15:13 -- nvmf/common.sh@7 -- # uname -s 00:27:44.735 11:15:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.735 11:15:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.735 11:15:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.735 11:15:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.735 11:15:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.735 11:15:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.735 11:15:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.735 11:15:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.735 11:15:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.735 11:15:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.735 11:15:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:44.735 11:15:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:44.735 11:15:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.735 11:15:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.735 11:15:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:44.735 11:15:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.735 11:15:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:44.735 11:15:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.735 11:15:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.735 11:15:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.735 11:15:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.735 11:15:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.735 11:15:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.735 11:15:13 -- paths/export.sh@5 -- # export PATH 00:27:44.735 11:15:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.735 11:15:13 -- nvmf/common.sh@47 -- # : 0 00:27:44.735 11:15:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.735 11:15:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.735 11:15:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.735 11:15:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.735 11:15:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.735 11:15:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.735 11:15:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.735 11:15:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.735 11:15:13 -- host/aer.sh@11 -- # nvmftestinit 00:27:44.735 11:15:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:44.735 11:15:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.735 11:15:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:44.735 11:15:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:44.735 11:15:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:44.735 11:15:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.735 11:15:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.735 11:15:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.735 11:15:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:44.735 11:15:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:44.735 11:15:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:44.735 11:15:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:44.735 11:15:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:44.735 11:15:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:44.735 11:15:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.735 11:15:13 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.735 11:15:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:44.735 11:15:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:44.735 11:15:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:44.735 11:15:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:44.735 11:15:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:44.735 11:15:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.735 11:15:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:44.735 11:15:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:44.735 11:15:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:44.735 11:15:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:44.735 11:15:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:44.735 11:15:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:44.735 Cannot find device "nvmf_tgt_br" 00:27:44.735 11:15:13 -- nvmf/common.sh@155 -- # true 00:27:44.735 11:15:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:44.735 Cannot find device "nvmf_tgt_br2" 00:27:44.735 11:15:13 -- nvmf/common.sh@156 -- # true 00:27:44.735 11:15:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:44.735 11:15:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:44.735 Cannot find device "nvmf_tgt_br" 00:27:44.735 11:15:13 -- nvmf/common.sh@158 -- # true 00:27:44.735 11:15:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:44.735 Cannot find device "nvmf_tgt_br2" 00:27:44.735 11:15:13 -- nvmf/common.sh@159 -- # true 00:27:44.735 11:15:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:44.735 11:15:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:44.735 11:15:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:44.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:44.735 11:15:13 -- nvmf/common.sh@162 -- # true 00:27:44.735 11:15:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:44.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:44.735 11:15:13 -- nvmf/common.sh@163 -- # true 00:27:44.735 11:15:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:44.735 11:15:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:44.991 11:15:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:44.991 11:15:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:44.991 11:15:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:44.991 11:15:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:44.991 11:15:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:44.991 11:15:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:44.992 11:15:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:44.992 11:15:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:44.992 11:15:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:44.992 11:15:13 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:44.992 11:15:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:44.992 11:15:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:44.992 11:15:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:44.992 11:15:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:44.992 11:15:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:44.992 11:15:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:44.992 11:15:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:44.992 11:15:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:44.992 11:15:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:44.992 11:15:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:44.992 11:15:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:44.992 11:15:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:44.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:27:44.992 00:27:44.992 --- 10.0.0.2 ping statistics --- 00:27:44.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.992 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:44.992 11:15:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:44.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:44.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:27:44.992 00:27:44.992 --- 10.0.0.3 ping statistics --- 00:27:44.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.992 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:27:44.992 11:15:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:44.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:44.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:44.992 00:27:44.992 --- 10.0.0.1 ping statistics --- 00:27:44.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.992 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:44.992 11:15:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.992 11:15:13 -- nvmf/common.sh@422 -- # return 0 00:27:44.992 11:15:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:44.992 11:15:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.992 11:15:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:44.992 11:15:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:44.992 11:15:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.992 11:15:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:44.992 11:15:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:44.992 11:15:13 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:44.992 11:15:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:44.992 11:15:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:44.992 11:15:13 -- common/autotest_common.sh@10 -- # set +x 00:27:44.992 11:15:13 -- nvmf/common.sh@470 -- # nvmfpid=98398 00:27:44.992 11:15:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:44.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:44.992 11:15:13 -- nvmf/common.sh@471 -- # waitforlisten 98398 00:27:44.992 11:15:13 -- common/autotest_common.sh@817 -- # '[' -z 98398 ']' 00:27:44.992 11:15:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.992 11:15:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:44.992 11:15:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.992 11:15:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:44.992 11:15:13 -- common/autotest_common.sh@10 -- # set +x 00:27:45.250 [2024-04-18 11:15:13.641219] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:45.250 [2024-04-18 11:15:13.641547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.250 [2024-04-18 11:15:13.785561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.250 [2024-04-18 11:15:13.881289] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.250 [2024-04-18 11:15:13.881552] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.250 [2024-04-18 11:15:13.881721] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.250 [2024-04-18 11:15:13.881870] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.250 [2024-04-18 11:15:13.881915] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:45.250 [2024-04-18 11:15:13.882216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.250 [2024-04-18 11:15:13.882351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.250 [2024-04-18 11:15:13.882431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.250 [2024-04-18 11:15:13.882431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.186 11:15:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:46.186 11:15:14 -- common/autotest_common.sh@850 -- # return 0 00:27:46.186 11:15:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:46.186 11:15:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:46.186 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.186 11:15:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.186 11:15:14 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.186 11:15:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.186 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.186 [2024-04-18 11:15:14.771288] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.186 11:15:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.186 11:15:14 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:46.186 11:15:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.186 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.186 Malloc0 00:27:46.186 11:15:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.186 11:15:14 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:46.186 11:15:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.186 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.444 11:15:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.444 11:15:14 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.444 11:15:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.444 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.444 11:15:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.444 11:15:14 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.444 11:15:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.444 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.444 [2024-04-18 11:15:14.847176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.444 11:15:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.444 11:15:14 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:46.444 11:15:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.444 11:15:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.444 [2024-04-18 11:15:14.854933] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:46.444 [ 00:27:46.444 { 00:27:46.444 "allow_any_host": true, 00:27:46.444 "hosts": [], 00:27:46.444 "listen_addresses": [], 00:27:46.444 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:46.444 "subtype": "Discovery" 00:27:46.444 }, 00:27:46.444 { 00:27:46.444 "allow_any_host": true, 00:27:46.444 "hosts": 
[], 00:27:46.444 "listen_addresses": [ 00:27:46.444 { 00:27:46.444 "adrfam": "IPv4", 00:27:46.444 "traddr": "10.0.0.2", 00:27:46.444 "transport": "TCP", 00:27:46.444 "trsvcid": "4420", 00:27:46.444 "trtype": "TCP" 00:27:46.444 } 00:27:46.444 ], 00:27:46.444 "max_cntlid": 65519, 00:27:46.444 "max_namespaces": 2, 00:27:46.444 "min_cntlid": 1, 00:27:46.444 "model_number": "SPDK bdev Controller", 00:27:46.444 "namespaces": [ 00:27:46.444 { 00:27:46.444 "bdev_name": "Malloc0", 00:27:46.444 "name": "Malloc0", 00:27:46.444 "nguid": "6FB4ADE0402242589EEDB5486BEA1837", 00:27:46.444 "nsid": 1, 00:27:46.444 "uuid": "6fb4ade0-4022-4258-9eed-b5486bea1837" 00:27:46.444 } 00:27:46.444 ], 00:27:46.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.444 "serial_number": "SPDK00000000000001", 00:27:46.444 "subtype": "NVMe" 00:27:46.444 } 00:27:46.444 ] 00:27:46.444 11:15:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.444 11:15:14 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:46.444 11:15:14 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:46.444 11:15:14 -- host/aer.sh@33 -- # aerpid=98458 00:27:46.444 11:15:14 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:46.444 11:15:14 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:46.444 11:15:14 -- common/autotest_common.sh@1251 -- # local i=0 00:27:46.444 11:15:14 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.444 11:15:14 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:27:46.444 11:15:14 -- common/autotest_common.sh@1254 -- # i=1 00:27:46.444 11:15:14 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:27:46.444 11:15:14 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.444 11:15:14 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:27:46.444 11:15:14 -- common/autotest_common.sh@1254 -- # i=2 00:27:46.444 11:15:14 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:27:46.444 11:15:15 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.703 11:15:15 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:46.703 11:15:15 -- common/autotest_common.sh@1262 -- # return 0 00:27:46.703 11:15:15 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:46.703 11:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.703 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:46.703 Malloc1 00:27:46.703 11:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.703 11:15:15 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:46.703 11:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.703 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:46.703 11:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.703 11:15:15 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:46.703 11:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.703 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:46.703 Asynchronous Event Request test 00:27:46.703 Attaching to 10.0.0.2 00:27:46.703 Attached to 10.0.0.2 00:27:46.703 Registering asynchronous event callbacks... 00:27:46.703 Starting namespace attribute notice tests for all controllers... 
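What the aer binary is exercising here: test/nvme/aer/aer connects to nqn.2016-06.io.spdk:cnode1, arms Asynchronous Event Requests, and waits; the script then hot-adds Malloc1 as namespace 2 over RPC, and the aer_cb lines that follow show the resulting Namespace Attribute Changed notice (log page 0x04, the Changed Namespace List). Assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock, as in the stock SPDK test harness, the trigger boils down to a single call:

scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # hot-add nsid 2; the target emits the AEN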
00:27:46.703 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:46.703 aer_cb - Changed Namespace 00:27:46.703 Cleaning up... 00:27:46.703 [ 00:27:46.703 { 00:27:46.703 "allow_any_host": true, 00:27:46.703 "hosts": [], 00:27:46.703 "listen_addresses": [], 00:27:46.703 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:46.703 "subtype": "Discovery" 00:27:46.703 }, 00:27:46.703 { 00:27:46.703 "allow_any_host": true, 00:27:46.703 "hosts": [], 00:27:46.703 "listen_addresses": [ 00:27:46.703 { 00:27:46.703 "adrfam": "IPv4", 00:27:46.703 "traddr": "10.0.0.2", 00:27:46.703 "transport": "TCP", 00:27:46.703 "trsvcid": "4420", 00:27:46.703 "trtype": "TCP" 00:27:46.703 } 00:27:46.703 ], 00:27:46.703 "max_cntlid": 65519, 00:27:46.703 "max_namespaces": 2, 00:27:46.703 "min_cntlid": 1, 00:27:46.703 "model_number": "SPDK bdev Controller", 00:27:46.703 "namespaces": [ 00:27:46.703 { 00:27:46.703 "bdev_name": "Malloc0", 00:27:46.703 "name": "Malloc0", 00:27:46.703 "nguid": "6FB4ADE0402242589EEDB5486BEA1837", 00:27:46.703 "nsid": 1, 00:27:46.703 "uuid": "6fb4ade0-4022-4258-9eed-b5486bea1837" 00:27:46.703 }, 00:27:46.703 { 00:27:46.703 "bdev_name": "Malloc1", 00:27:46.703 "name": "Malloc1", 00:27:46.703 "nguid": "21CB99F75B03483B9CA7A1AF7E04F03F", 00:27:46.703 "nsid": 2, 00:27:46.703 "uuid": "21cb99f7-5b03-483b-9ca7-a1af7e04f03f" 00:27:46.703 } 00:27:46.703 ], 00:27:46.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.703 "serial_number": "SPDK00000000000001", 00:27:46.703 "subtype": "NVMe" 00:27:46.703 } 00:27:46.703 ] 00:27:46.703 11:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.703 11:15:15 -- host/aer.sh@43 -- # wait 98458 00:27:46.703 11:15:15 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:46.703 11:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.703 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:46.703 11:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.703 11:15:15 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:46.703 11:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.703 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:46.703 11:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.703 11:15:15 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:46.703 11:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.703 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:46.703 11:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.703 11:15:15 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:46.703 11:15:15 -- host/aer.sh@51 -- # nvmftestfini 00:27:46.703 11:15:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:46.703 11:15:15 -- nvmf/common.sh@117 -- # sync 00:27:46.703 11:15:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.703 11:15:15 -- nvmf/common.sh@120 -- # set +e 00:27:46.703 11:15:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.703 11:15:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.703 rmmod nvme_tcp 00:27:46.703 rmmod nvme_fabrics 00:27:46.703 rmmod nvme_keyring 00:27:46.703 11:15:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.703 11:15:15 -- nvmf/common.sh@124 -- # set -e 00:27:46.703 11:15:15 -- nvmf/common.sh@125 -- # return 0 00:27:46.703 11:15:15 -- nvmf/common.sh@478 -- # '[' -n 98398 ']' 00:27:46.703 11:15:15 -- nvmf/common.sh@479 -- # killprocess 98398 00:27:46.703 11:15:15 -- 
common/autotest_common.sh@936 -- # '[' -z 98398 ']' 00:27:46.703 11:15:15 -- common/autotest_common.sh@940 -- # kill -0 98398 00:27:46.703 11:15:15 -- common/autotest_common.sh@941 -- # uname 00:27:46.962 11:15:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:46.962 11:15:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98398 00:27:46.962 killing process with pid 98398 00:27:46.962 11:15:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:46.962 11:15:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:46.962 11:15:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98398' 00:27:46.962 11:15:15 -- common/autotest_common.sh@955 -- # kill 98398 00:27:46.962 [2024-04-18 11:15:15.362568] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:46.962 11:15:15 -- common/autotest_common.sh@960 -- # wait 98398 00:27:46.962 11:15:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:46.962 11:15:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:46.962 11:15:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:46.962 11:15:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.962 11:15:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.962 11:15:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.962 11:15:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.962 11:15:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.221 11:15:15 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:47.221 00:27:47.221 real 0m2.497s 00:27:47.221 user 0m7.050s 00:27:47.221 sys 0m0.673s 00:27:47.221 11:15:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:47.221 ************************************ 00:27:47.221 END TEST nvmf_aer 00:27:47.221 ************************************ 00:27:47.221 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:47.221 11:15:15 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:47.221 11:15:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:47.221 11:15:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:47.221 11:15:15 -- common/autotest_common.sh@10 -- # set +x 00:27:47.221 ************************************ 00:27:47.221 START TEST nvmf_async_init 00:27:47.221 ************************************ 00:27:47.221 11:15:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:47.221 * Looking for test storage... 
00:27:47.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:47.221 11:15:15 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:47.221 11:15:15 -- nvmf/common.sh@7 -- # uname -s 00:27:47.221 11:15:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.221 11:15:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.221 11:15:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.221 11:15:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.221 11:15:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.221 11:15:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.221 11:15:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.221 11:15:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.221 11:15:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.221 11:15:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.221 11:15:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:47.221 11:15:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:47.221 11:15:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.221 11:15:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.221 11:15:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:47.221 11:15:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.221 11:15:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:47.221 11:15:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.221 11:15:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.221 11:15:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.221 11:15:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.221 11:15:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.221 11:15:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.221 11:15:15 -- paths/export.sh@5 -- # export PATH 00:27:47.221 11:15:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.221 11:15:15 -- nvmf/common.sh@47 -- # : 0 00:27:47.221 11:15:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.221 11:15:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.221 11:15:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.221 11:15:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.221 11:15:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.221 11:15:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.221 11:15:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.221 11:15:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.221 11:15:15 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:47.221 11:15:15 -- host/async_init.sh@14 -- # null_block_size=512 00:27:47.221 11:15:15 -- host/async_init.sh@15 -- # null_bdev=null0 00:27:47.221 11:15:15 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:47.221 11:15:15 -- host/async_init.sh@20 -- # uuidgen 00:27:47.221 11:15:15 -- host/async_init.sh@20 -- # tr -d - 00:27:47.221 11:15:15 -- host/async_init.sh@20 -- # nguid=c548130fd84c4a3e91ccf0e3c2f3441b 00:27:47.221 11:15:15 -- host/async_init.sh@22 -- # nvmftestinit 00:27:47.221 11:15:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:47.221 11:15:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.221 11:15:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:47.221 11:15:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:47.221 11:15:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:47.221 11:15:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.221 11:15:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.221 11:15:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.221 11:15:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:47.221 11:15:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:47.221 11:15:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:47.221 11:15:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:47.221 11:15:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:47.221 11:15:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:47.221 11:15:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.221 11:15:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.221 11:15:15 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:47.221 11:15:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:47.221 11:15:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:47.221 11:15:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:47.221 11:15:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:47.221 11:15:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.221 11:15:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:47.221 11:15:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:47.221 11:15:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:47.221 11:15:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:47.221 11:15:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:47.480 11:15:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:47.480 Cannot find device "nvmf_tgt_br" 00:27:47.480 11:15:15 -- nvmf/common.sh@155 -- # true 00:27:47.480 11:15:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:47.480 Cannot find device "nvmf_tgt_br2" 00:27:47.480 11:15:15 -- nvmf/common.sh@156 -- # true 00:27:47.480 11:15:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:47.480 11:15:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:47.480 Cannot find device "nvmf_tgt_br" 00:27:47.480 11:15:15 -- nvmf/common.sh@158 -- # true 00:27:47.480 11:15:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:47.480 Cannot find device "nvmf_tgt_br2" 00:27:47.480 11:15:15 -- nvmf/common.sh@159 -- # true 00:27:47.480 11:15:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:47.480 11:15:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:47.480 11:15:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:47.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:47.480 11:15:15 -- nvmf/common.sh@162 -- # true 00:27:47.480 11:15:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:47.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:47.480 11:15:15 -- nvmf/common.sh@163 -- # true 00:27:47.480 11:15:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:47.480 11:15:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:47.480 11:15:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:47.480 11:15:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:47.480 11:15:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:47.480 11:15:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:47.480 11:15:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:47.480 11:15:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:47.480 11:15:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:47.480 11:15:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:47.480 11:15:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:47.480 11:15:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:47.480 11:15:16 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:47.480 11:15:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:47.480 11:15:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:47.480 11:15:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:47.480 11:15:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:47.480 11:15:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:47.480 11:15:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:47.480 11:15:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:47.738 11:15:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:47.738 11:15:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:47.738 11:15:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:47.738 11:15:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:47.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:27:47.738 00:27:47.738 --- 10.0.0.2 ping statistics --- 00:27:47.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.738 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:27:47.738 11:15:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:47.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:47.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:27:47.738 00:27:47.738 --- 10.0.0.3 ping statistics --- 00:27:47.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.738 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:47.738 11:15:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:47.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:27:47.738 00:27:47.738 --- 10.0.0.1 ping statistics --- 00:27:47.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.738 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:27:47.738 11:15:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.738 11:15:16 -- nvmf/common.sh@422 -- # return 0 00:27:47.738 11:15:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:47.738 11:15:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.738 11:15:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:47.738 11:15:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:47.738 11:15:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.738 11:15:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:47.738 11:15:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:47.738 11:15:16 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:47.738 11:15:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:47.738 11:15:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:47.738 11:15:16 -- common/autotest_common.sh@10 -- # set +x 00:27:47.738 11:15:16 -- nvmf/common.sh@470 -- # nvmfpid=98632 00:27:47.738 11:15:16 -- nvmf/common.sh@471 -- # waitforlisten 98632 00:27:47.738 11:15:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:47.738 11:15:16 -- common/autotest_common.sh@817 -- # '[' -z 98632 ']' 00:27:47.739 11:15:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.739 11:15:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:47.739 11:15:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.739 11:15:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:47.739 11:15:16 -- common/autotest_common.sh@10 -- # set +x 00:27:47.739 [2024-04-18 11:15:16.232272] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:47.739 [2024-04-18 11:15:16.232360] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.739 [2024-04-18 11:15:16.368221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.997 [2024-04-18 11:15:16.458781] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.997 [2024-04-18 11:15:16.458836] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.997 [2024-04-18 11:15:16.458855] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.997 [2024-04-18 11:15:16.458870] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.997 [2024-04-18 11:15:16.458881] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
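One detail worth keeping in mind before the RPC setup that follows: the namespace GUID passed to nvmf_subsystem_add_ns -g was generated once up front (host/async_init.sh@20 in the trace above) by stripping the dashes from a random UUID, which is why bdev_get_bdevs later reports the same value back in dashed UUID form. A minimal sketch of that derivation:

nguid=$(uuidgen | tr -d -)   # e.g. c548130fd84c4a3e91ccf0e3c2f3441b, reported later as c548130f-d84c-4a3e-91cc-f0e3c2f3441b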
00:27:47.997 [2024-04-18 11:15:16.458926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.932 11:15:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:48.932 11:15:17 -- common/autotest_common.sh@850 -- # return 0 00:27:48.932 11:15:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:48.932 11:15:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.932 11:15:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.932 11:15:17 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:48.932 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.932 [2024-04-18 11:15:17.308289] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.932 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.932 11:15:17 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:48.932 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.932 null0 00:27:48.932 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.932 11:15:17 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:48.932 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.932 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.932 11:15:17 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:48.932 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.932 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.932 11:15:17 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c548130fd84c4a3e91ccf0e3c2f3441b 00:27:48.932 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.932 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.932 11:15:17 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:48.932 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.932 [2024-04-18 11:15:17.348378] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.932 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.932 11:15:17 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:48.932 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.932 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.191 nvme0n1 00:27:49.191 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.191 11:15:17 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:49.191 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.191 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.191 [ 00:27:49.191 { 00:27:49.191 "aliases": [ 00:27:49.191 "c548130f-d84c-4a3e-91cc-f0e3c2f3441b" 
00:27:49.191 ], 00:27:49.191 "assigned_rate_limits": { 00:27:49.191 "r_mbytes_per_sec": 0, 00:27:49.191 "rw_ios_per_sec": 0, 00:27:49.191 "rw_mbytes_per_sec": 0, 00:27:49.191 "w_mbytes_per_sec": 0 00:27:49.191 }, 00:27:49.191 "block_size": 512, 00:27:49.191 "claimed": false, 00:27:49.191 "driver_specific": { 00:27:49.191 "mp_policy": "active_passive", 00:27:49.191 "nvme": [ 00:27:49.191 { 00:27:49.191 "ctrlr_data": { 00:27:49.191 "ana_reporting": false, 00:27:49.191 "cntlid": 1, 00:27:49.191 "firmware_revision": "24.05", 00:27:49.191 "model_number": "SPDK bdev Controller", 00:27:49.191 "multi_ctrlr": true, 00:27:49.191 "oacs": { 00:27:49.191 "firmware": 0, 00:27:49.191 "format": 0, 00:27:49.191 "ns_manage": 0, 00:27:49.191 "security": 0 00:27:49.191 }, 00:27:49.191 "serial_number": "00000000000000000000", 00:27:49.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.191 "vendor_id": "0x8086" 00:27:49.191 }, 00:27:49.191 "ns_data": { 00:27:49.191 "can_share": true, 00:27:49.191 "id": 1 00:27:49.191 }, 00:27:49.191 "trid": { 00:27:49.191 "adrfam": "IPv4", 00:27:49.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.191 "traddr": "10.0.0.2", 00:27:49.191 "trsvcid": "4420", 00:27:49.191 "trtype": "TCP" 00:27:49.191 }, 00:27:49.191 "vs": { 00:27:49.191 "nvme_version": "1.3" 00:27:49.191 } 00:27:49.191 } 00:27:49.191 ] 00:27:49.191 }, 00:27:49.191 "memory_domains": [ 00:27:49.191 { 00:27:49.191 "dma_device_id": "system", 00:27:49.191 "dma_device_type": 1 00:27:49.191 } 00:27:49.191 ], 00:27:49.191 "name": "nvme0n1", 00:27:49.191 "num_blocks": 2097152, 00:27:49.191 "product_name": "NVMe disk", 00:27:49.191 "supported_io_types": { 00:27:49.191 "abort": true, 00:27:49.191 "compare": true, 00:27:49.191 "compare_and_write": true, 00:27:49.191 "flush": true, 00:27:49.191 "nvme_admin": true, 00:27:49.191 "nvme_io": true, 00:27:49.191 "read": true, 00:27:49.191 "reset": true, 00:27:49.191 "unmap": false, 00:27:49.191 "write": true, 00:27:49.191 "write_zeroes": true 00:27:49.191 }, 00:27:49.191 "uuid": "c548130f-d84c-4a3e-91cc-f0e3c2f3441b", 00:27:49.191 "zoned": false 00:27:49.191 } 00:27:49.191 ] 00:27:49.191 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.191 11:15:17 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:49.191 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.191 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.191 [2024-04-18 11:15:17.616518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:49.191 [2024-04-18 11:15:17.616618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77e6e0 (9): Bad file descriptor 00:27:49.191 [2024-04-18 11:15:17.748258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
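The reset that just completed is the heart of the async_init check: the host attaches a bdev controller to the subsystem, forces a reconnect, then re-reads the bdev to confirm it came back (note cntlid moves from 1 to 2 in the JSON that follows). Condensed into the underlying RPCs, and again assuming rpc_cmd is scripts/rpc.py on /var/tmp/spdk.sock, the flow is roughly:

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_nvme_reset_controller nvme0    # drop and re-establish the TCP connection
scripts/rpc.py bdev_get_bdevs -b nvme0n1           # verify the namespace is still exposed after the reset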
00:27:49.191 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.191 11:15:17 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:49.191 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.191 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.191 [ 00:27:49.191 { 00:27:49.191 "aliases": [ 00:27:49.191 "c548130f-d84c-4a3e-91cc-f0e3c2f3441b" 00:27:49.191 ], 00:27:49.191 "assigned_rate_limits": { 00:27:49.191 "r_mbytes_per_sec": 0, 00:27:49.191 "rw_ios_per_sec": 0, 00:27:49.191 "rw_mbytes_per_sec": 0, 00:27:49.191 "w_mbytes_per_sec": 0 00:27:49.191 }, 00:27:49.191 "block_size": 512, 00:27:49.191 "claimed": false, 00:27:49.191 "driver_specific": { 00:27:49.191 "mp_policy": "active_passive", 00:27:49.191 "nvme": [ 00:27:49.191 { 00:27:49.191 "ctrlr_data": { 00:27:49.191 "ana_reporting": false, 00:27:49.191 "cntlid": 2, 00:27:49.191 "firmware_revision": "24.05", 00:27:49.191 "model_number": "SPDK bdev Controller", 00:27:49.191 "multi_ctrlr": true, 00:27:49.191 "oacs": { 00:27:49.191 "firmware": 0, 00:27:49.191 "format": 0, 00:27:49.191 "ns_manage": 0, 00:27:49.191 "security": 0 00:27:49.191 }, 00:27:49.191 "serial_number": "00000000000000000000", 00:27:49.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.191 "vendor_id": "0x8086" 00:27:49.192 }, 00:27:49.192 "ns_data": { 00:27:49.192 "can_share": true, 00:27:49.192 "id": 1 00:27:49.192 }, 00:27:49.192 "trid": { 00:27:49.192 "adrfam": "IPv4", 00:27:49.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.192 "traddr": "10.0.0.2", 00:27:49.192 "trsvcid": "4420", 00:27:49.192 "trtype": "TCP" 00:27:49.192 }, 00:27:49.192 "vs": { 00:27:49.192 "nvme_version": "1.3" 00:27:49.192 } 00:27:49.192 } 00:27:49.192 ] 00:27:49.192 }, 00:27:49.192 "memory_domains": [ 00:27:49.192 { 00:27:49.192 "dma_device_id": "system", 00:27:49.192 "dma_device_type": 1 00:27:49.192 } 00:27:49.192 ], 00:27:49.192 "name": "nvme0n1", 00:27:49.192 "num_blocks": 2097152, 00:27:49.192 "product_name": "NVMe disk", 00:27:49.192 "supported_io_types": { 00:27:49.192 "abort": true, 00:27:49.192 "compare": true, 00:27:49.192 "compare_and_write": true, 00:27:49.192 "flush": true, 00:27:49.192 "nvme_admin": true, 00:27:49.192 "nvme_io": true, 00:27:49.192 "read": true, 00:27:49.192 "reset": true, 00:27:49.192 "unmap": false, 00:27:49.192 "write": true, 00:27:49.192 "write_zeroes": true 00:27:49.192 }, 00:27:49.192 "uuid": "c548130f-d84c-4a3e-91cc-f0e3c2f3441b", 00:27:49.192 "zoned": false 00:27:49.192 } 00:27:49.192 ] 00:27:49.192 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.192 11:15:17 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.192 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.192 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.192 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.192 11:15:17 -- host/async_init.sh@53 -- # mktemp 00:27:49.192 11:15:17 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.M0wZu8Mx84 00:27:49.192 11:15:17 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:49.192 11:15:17 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.M0wZu8Mx84 00:27:49.192 11:15:17 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:49.192 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.192 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.192 11:15:17 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.192 11:15:17 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:49.192 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.192 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.192 [2024-04-18 11:15:17.820666] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:49.192 [2024-04-18 11:15:17.820837] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:49.192 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.192 11:15:17 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M0wZu8Mx84 00:27:49.192 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.192 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.192 [2024-04-18 11:15:17.828668] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:49.450 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.450 11:15:17 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M0wZu8Mx84 00:27:49.450 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.450 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.450 [2024-04-18 11:15:17.836658] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:49.450 [2024-04-18 11:15:17.836724] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:49.450 nvme0n1 00:27:49.450 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.450 11:15:17 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:49.450 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.450 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.450 [ 00:27:49.450 { 00:27:49.450 "aliases": [ 00:27:49.450 "c548130f-d84c-4a3e-91cc-f0e3c2f3441b" 00:27:49.450 ], 00:27:49.450 "assigned_rate_limits": { 00:27:49.450 "r_mbytes_per_sec": 0, 00:27:49.450 "rw_ios_per_sec": 0, 00:27:49.450 "rw_mbytes_per_sec": 0, 00:27:49.450 "w_mbytes_per_sec": 0 00:27:49.450 }, 00:27:49.450 "block_size": 512, 00:27:49.450 "claimed": false, 00:27:49.450 "driver_specific": { 00:27:49.450 "mp_policy": "active_passive", 00:27:49.450 "nvme": [ 00:27:49.450 { 00:27:49.450 "ctrlr_data": { 00:27:49.450 "ana_reporting": false, 00:27:49.450 "cntlid": 3, 00:27:49.450 "firmware_revision": "24.05", 00:27:49.450 "model_number": "SPDK bdev Controller", 00:27:49.450 "multi_ctrlr": true, 00:27:49.450 "oacs": { 00:27:49.450 "firmware": 0, 00:27:49.450 "format": 0, 00:27:49.450 "ns_manage": 0, 00:27:49.450 "security": 0 00:27:49.450 }, 00:27:49.450 "serial_number": "00000000000000000000", 00:27:49.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.450 "vendor_id": "0x8086" 00:27:49.450 }, 00:27:49.450 "ns_data": { 00:27:49.450 "can_share": true, 00:27:49.450 "id": 1 00:27:49.450 }, 00:27:49.450 "trid": { 00:27:49.450 "adrfam": "IPv4", 00:27:49.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.450 "traddr": "10.0.0.2", 00:27:49.450 "trsvcid": "4421", 00:27:49.450 "trtype": 
"TCP" 00:27:49.450 }, 00:27:49.451 "vs": { 00:27:49.451 "nvme_version": "1.3" 00:27:49.451 } 00:27:49.451 } 00:27:49.451 ] 00:27:49.451 }, 00:27:49.451 "memory_domains": [ 00:27:49.451 { 00:27:49.451 "dma_device_id": "system", 00:27:49.451 "dma_device_type": 1 00:27:49.451 } 00:27:49.451 ], 00:27:49.451 "name": "nvme0n1", 00:27:49.451 "num_blocks": 2097152, 00:27:49.451 "product_name": "NVMe disk", 00:27:49.451 "supported_io_types": { 00:27:49.451 "abort": true, 00:27:49.451 "compare": true, 00:27:49.451 "compare_and_write": true, 00:27:49.451 "flush": true, 00:27:49.451 "nvme_admin": true, 00:27:49.451 "nvme_io": true, 00:27:49.451 "read": true, 00:27:49.451 "reset": true, 00:27:49.451 "unmap": false, 00:27:49.451 "write": true, 00:27:49.451 "write_zeroes": true 00:27:49.451 }, 00:27:49.451 "uuid": "c548130f-d84c-4a3e-91cc-f0e3c2f3441b", 00:27:49.451 "zoned": false 00:27:49.451 } 00:27:49.451 ] 00:27:49.451 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.451 11:15:17 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.451 11:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.451 11:15:17 -- common/autotest_common.sh@10 -- # set +x 00:27:49.451 11:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.451 11:15:17 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.M0wZu8Mx84 00:27:49.451 11:15:17 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:49.451 11:15:17 -- host/async_init.sh@78 -- # nvmftestfini 00:27:49.451 11:15:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:49.451 11:15:17 -- nvmf/common.sh@117 -- # sync 00:27:49.451 11:15:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.451 11:15:17 -- nvmf/common.sh@120 -- # set +e 00:27:49.451 11:15:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.451 11:15:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.451 rmmod nvme_tcp 00:27:49.451 rmmod nvme_fabrics 00:27:49.451 rmmod nvme_keyring 00:27:49.451 11:15:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.451 11:15:18 -- nvmf/common.sh@124 -- # set -e 00:27:49.451 11:15:18 -- nvmf/common.sh@125 -- # return 0 00:27:49.451 11:15:18 -- nvmf/common.sh@478 -- # '[' -n 98632 ']' 00:27:49.451 11:15:18 -- nvmf/common.sh@479 -- # killprocess 98632 00:27:49.451 11:15:18 -- common/autotest_common.sh@936 -- # '[' -z 98632 ']' 00:27:49.451 11:15:18 -- common/autotest_common.sh@940 -- # kill -0 98632 00:27:49.451 11:15:18 -- common/autotest_common.sh@941 -- # uname 00:27:49.451 11:15:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:49.451 11:15:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98632 00:27:49.451 11:15:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:49.451 11:15:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:49.451 killing process with pid 98632 00:27:49.451 11:15:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98632' 00:27:49.451 11:15:18 -- common/autotest_common.sh@955 -- # kill 98632 00:27:49.451 [2024-04-18 11:15:18.074823] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:49.451 [2024-04-18 11:15:18.074859] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:49.451 11:15:18 -- common/autotest_common.sh@960 -- # wait 98632 00:27:49.709 11:15:18 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:27:49.709 11:15:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:49.709 11:15:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:49.709 11:15:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.709 11:15:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.709 11:15:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.709 11:15:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.709 11:15:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.709 11:15:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:49.709 00:27:49.709 real 0m2.582s 00:27:49.709 user 0m2.463s 00:27:49.709 sys 0m0.608s 00:27:49.709 11:15:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:49.709 ************************************ 00:27:49.709 END TEST nvmf_async_init 00:27:49.709 ************************************ 00:27:49.709 11:15:18 -- common/autotest_common.sh@10 -- # set +x 00:27:49.709 11:15:18 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:49.709 11:15:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:49.709 11:15:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.709 11:15:18 -- common/autotest_common.sh@10 -- # set +x 00:27:49.967 ************************************ 00:27:49.967 START TEST dma 00:27:49.967 ************************************ 00:27:49.967 11:15:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:49.967 * Looking for test storage... 00:27:49.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:49.967 11:15:18 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:49.967 11:15:18 -- nvmf/common.sh@7 -- # uname -s 00:27:49.967 11:15:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.967 11:15:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.967 11:15:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.967 11:15:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.967 11:15:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.967 11:15:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.967 11:15:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.967 11:15:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.967 11:15:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.967 11:15:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.967 11:15:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:49.967 11:15:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:49.967 11:15:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.967 11:15:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.967 11:15:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:49.967 11:15:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.967 11:15:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:49.967 11:15:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.967 11:15:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.967 11:15:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:49.967 11:15:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.968 11:15:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.968 11:15:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.968 11:15:18 -- paths/export.sh@5 -- # export PATH 00:27:49.968 11:15:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.968 11:15:18 -- nvmf/common.sh@47 -- # : 0 00:27:49.968 11:15:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.968 11:15:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.968 11:15:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.968 11:15:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.968 11:15:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.968 11:15:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.968 11:15:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.968 11:15:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.968 11:15:18 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:49.968 11:15:18 -- host/dma.sh@13 -- # exit 0 00:27:49.968 00:27:49.968 real 0m0.119s 00:27:49.968 user 0m0.042s 00:27:49.968 sys 0m0.077s 00:27:49.968 11:15:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:49.968 11:15:18 -- common/autotest_common.sh@10 -- # set +x 00:27:49.968 ************************************ 00:27:49.968 END TEST dma 00:27:49.968 ************************************ 00:27:49.968 11:15:18 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:49.968 11:15:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:49.968 11:15:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.968 11:15:18 -- common/autotest_common.sh@10 -- # set +x 00:27:50.226 ************************************ 00:27:50.226 START TEST nvmf_identify 00:27:50.226 ************************************ 00:27:50.226 11:15:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:50.226 * Looking for test storage... 00:27:50.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:50.226 11:15:18 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:50.226 11:15:18 -- nvmf/common.sh@7 -- # uname -s 00:27:50.226 11:15:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.226 11:15:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.226 11:15:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.226 11:15:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.226 11:15:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.226 11:15:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.226 11:15:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.226 11:15:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.226 11:15:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.226 11:15:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.226 11:15:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:50.226 11:15:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:50.226 11:15:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.226 11:15:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.226 11:15:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:50.226 11:15:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.226 11:15:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:50.226 11:15:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.226 11:15:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.226 11:15:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.226 11:15:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.226 11:15:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.226 11:15:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.226 11:15:18 -- paths/export.sh@5 -- # export PATH 00:27:50.226 11:15:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.226 11:15:18 -- nvmf/common.sh@47 -- # : 0 00:27:50.226 11:15:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.226 11:15:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.226 11:15:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.226 11:15:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.226 11:15:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.226 11:15:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.227 11:15:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.227 11:15:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.227 11:15:18 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.227 11:15:18 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.227 11:15:18 -- host/identify.sh@14 -- # nvmftestinit 00:27:50.227 11:15:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:50.227 11:15:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.227 11:15:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:50.227 11:15:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:50.227 11:15:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:50.227 11:15:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.227 11:15:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.227 11:15:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.227 11:15:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:50.227 11:15:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:50.227 11:15:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:50.227 11:15:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:50.227 11:15:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:50.227 11:15:18 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:27:50.227 11:15:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.227 11:15:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.227 11:15:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:50.227 11:15:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:50.227 11:15:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:50.227 11:15:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:50.227 11:15:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:50.227 11:15:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.227 11:15:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:50.227 11:15:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:50.227 11:15:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:50.227 11:15:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:50.227 11:15:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:50.227 11:15:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:50.227 Cannot find device "nvmf_tgt_br" 00:27:50.227 11:15:18 -- nvmf/common.sh@155 -- # true 00:27:50.227 11:15:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:50.227 Cannot find device "nvmf_tgt_br2" 00:27:50.227 11:15:18 -- nvmf/common.sh@156 -- # true 00:27:50.227 11:15:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:50.227 11:15:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:50.227 Cannot find device "nvmf_tgt_br" 00:27:50.227 11:15:18 -- nvmf/common.sh@158 -- # true 00:27:50.227 11:15:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:50.227 Cannot find device "nvmf_tgt_br2" 00:27:50.227 11:15:18 -- nvmf/common.sh@159 -- # true 00:27:50.227 11:15:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:50.485 11:15:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:50.485 11:15:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:50.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.485 11:15:18 -- nvmf/common.sh@162 -- # true 00:27:50.485 11:15:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:50.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.485 11:15:18 -- nvmf/common.sh@163 -- # true 00:27:50.485 11:15:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:50.485 11:15:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:50.485 11:15:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:50.485 11:15:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:50.485 11:15:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:50.485 11:15:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:50.485 11:15:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:50.485 11:15:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:50.485 11:15:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:50.485 11:15:18 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:27:50.485 11:15:19 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:50.485 11:15:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:50.485 11:15:19 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:50.485 11:15:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:50.485 11:15:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:50.485 11:15:19 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:50.485 11:15:19 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:50.485 11:15:19 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:50.485 11:15:19 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:50.485 11:15:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:50.485 11:15:19 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:50.485 11:15:19 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:50.485 11:15:19 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:50.485 11:15:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:50.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:27:50.485 00:27:50.485 --- 10.0.0.2 ping statistics --- 00:27:50.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.485 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:50.485 11:15:19 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:50.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:50.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:27:50.485 00:27:50.485 --- 10.0.0.3 ping statistics --- 00:27:50.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.485 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:27:50.485 11:15:19 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:50.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:27:50.485 00:27:50.485 --- 10.0.0.1 ping statistics --- 00:27:50.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.485 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:50.485 11:15:19 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.485 11:15:19 -- nvmf/common.sh@422 -- # return 0 00:27:50.485 11:15:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:50.485 11:15:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.485 11:15:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:50.485 11:15:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:50.485 11:15:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.485 11:15:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:50.485 11:15:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:50.743 11:15:19 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:50.743 11:15:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:50.743 11:15:19 -- common/autotest_common.sh@10 -- # set +x 00:27:50.743 11:15:19 -- host/identify.sh@19 -- # nvmfpid=98910 00:27:50.743 11:15:19 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:50.743 11:15:19 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:50.743 11:15:19 -- host/identify.sh@23 -- # waitforlisten 98910 00:27:50.743 11:15:19 -- common/autotest_common.sh@817 -- # '[' -z 98910 ']' 00:27:50.743 11:15:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.743 11:15:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:50.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.743 11:15:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.743 11:15:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:50.743 11:15:19 -- common/autotest_common.sh@10 -- # set +x 00:27:50.743 [2024-04-18 11:15:19.213491] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:50.743 [2024-04-18 11:15:19.213603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.743 [2024-04-18 11:15:19.362575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.001 [2024-04-18 11:15:19.464112] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.001 [2024-04-18 11:15:19.464172] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.001 [2024-04-18 11:15:19.464186] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.001 [2024-04-18 11:15:19.464196] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.001 [2024-04-18 11:15:19.464205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:51.001 [2024-04-18 11:15:19.465024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.001 [2024-04-18 11:15:19.465206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.001 [2024-04-18 11:15:19.466196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.001 [2024-04-18 11:15:19.466206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.567 11:15:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:51.567 11:15:20 -- common/autotest_common.sh@850 -- # return 0 00:27:51.567 11:15:20 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:51.567 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.567 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.567 [2024-04-18 11:15:20.196496] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.824 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.824 11:15:20 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:51.824 11:15:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:51.824 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.824 11:15:20 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:51.824 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.824 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.824 Malloc0 00:27:51.824 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.824 11:15:20 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.824 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.824 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.824 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.824 11:15:20 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:51.824 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.824 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.824 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.824 11:15:20 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.824 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.824 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.824 [2024-04-18 11:15:20.313481] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.824 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.824 11:15:20 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:51.824 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.824 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.824 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.824 11:15:20 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:51.824 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.824 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:51.824 [2024-04-18 11:15:20.329232] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:51.824 [ 
00:27:51.824 { 00:27:51.824 "allow_any_host": true, 00:27:51.824 "hosts": [], 00:27:51.825 "listen_addresses": [ 00:27:51.825 { 00:27:51.825 "adrfam": "IPv4", 00:27:51.825 "traddr": "10.0.0.2", 00:27:51.825 "transport": "TCP", 00:27:51.825 "trsvcid": "4420", 00:27:51.825 "trtype": "TCP" 00:27:51.825 } 00:27:51.825 ], 00:27:51.825 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:51.825 "subtype": "Discovery" 00:27:51.825 }, 00:27:51.825 { 00:27:51.825 "allow_any_host": true, 00:27:51.825 "hosts": [], 00:27:51.825 "listen_addresses": [ 00:27:51.825 { 00:27:51.825 "adrfam": "IPv4", 00:27:51.825 "traddr": "10.0.0.2", 00:27:51.825 "transport": "TCP", 00:27:51.825 "trsvcid": "4420", 00:27:51.825 "trtype": "TCP" 00:27:51.825 } 00:27:51.825 ], 00:27:51.825 "max_cntlid": 65519, 00:27:51.825 "max_namespaces": 32, 00:27:51.825 "min_cntlid": 1, 00:27:51.825 "model_number": "SPDK bdev Controller", 00:27:51.825 "namespaces": [ 00:27:51.825 { 00:27:51.825 "bdev_name": "Malloc0", 00:27:51.825 "eui64": "ABCDEF0123456789", 00:27:51.825 "name": "Malloc0", 00:27:51.825 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:51.825 "nsid": 1, 00:27:51.825 "uuid": "270c0006-9892-4ad7-adfd-bb6d42ebcfea" 00:27:51.825 } 00:27:51.825 ], 00:27:51.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.825 "serial_number": "SPDK00000000000001", 00:27:51.825 "subtype": "NVMe" 00:27:51.825 } 00:27:51.825 ] 00:27:51.825 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.825 11:15:20 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:51.825 [2024-04-18 11:15:20.368479] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
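[editor's note] The rpc_cmd calls traced above correspond directly to SPDK's scripts/rpc.py subcommands, and the JSON subsystem dump printed just before this point is the output of the final nvmf_get_subsystems call. A hedged sketch of the same configuration done by hand (same arguments as in the log; run from the SPDK repo root against an already running nvmf_tgt, which rpc.py reaches over the /var/tmp/spdk.sock UNIX socket, so it does not need to run inside the network namespace):

  # Configure the running nvmf_tgt the same way the test's rpc_cmd calls do.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems

  # Query the discovery subsystem the way host/identify.sh does; the controller report
  # later in this log is the output of this command.
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

On the initiator side the same discovery records could also be listed with nvme-cli (nvme discover -t tcp -a 10.0.0.2 -s 4420), though this test exercises spdk_nvme_identify instead; the identify tool's own startup trace begins just above and continues below.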
00:27:51.825 [2024-04-18 11:15:20.368530] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98963 ] 00:27:52.085 [2024-04-18 11:15:20.507548] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:52.085 [2024-04-18 11:15:20.507638] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:52.085 [2024-04-18 11:15:20.507645] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:52.085 [2024-04-18 11:15:20.507660] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:52.085 [2024-04-18 11:15:20.507671] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:52.085 [2024-04-18 11:15:20.507824] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:52.085 [2024-04-18 11:15:20.507875] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13c62c0 0 00:27:52.085 [2024-04-18 11:15:20.512056] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:52.085 [2024-04-18 11:15:20.512079] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:52.085 [2024-04-18 11:15:20.512085] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:52.085 [2024-04-18 11:15:20.512089] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:52.085 [2024-04-18 11:15:20.512137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.085 [2024-04-18 11:15:20.512144] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.085 [2024-04-18 11:15:20.512148] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.085 [2024-04-18 11:15:20.512163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:52.085 [2024-04-18 11:15:20.512193] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.085 [2024-04-18 11:15:20.520055] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.085 [2024-04-18 11:15:20.520077] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.085 [2024-04-18 11:15:20.520082] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.085 [2024-04-18 11:15:20.520088] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.085 [2024-04-18 11:15:20.520100] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:52.085 [2024-04-18 11:15:20.520109] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:52.085 [2024-04-18 11:15:20.520115] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:52.086 [2024-04-18 11:15:20.520134] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520140] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 
11:15:20.520144] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.520154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.086 [2024-04-18 11:15:20.520183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.086 [2024-04-18 11:15:20.520305] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.086 [2024-04-18 11:15:20.520328] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.086 [2024-04-18 11:15:20.520332] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520336] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.086 [2024-04-18 11:15:20.520348] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:52.086 [2024-04-18 11:15:20.520356] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:52.086 [2024-04-18 11:15:20.520365] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520369] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520373] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.520382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.086 [2024-04-18 11:15:20.520403] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.086 [2024-04-18 11:15:20.520500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.086 [2024-04-18 11:15:20.520507] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.086 [2024-04-18 11:15:20.520510] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520515] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.086 [2024-04-18 11:15:20.520522] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:52.086 [2024-04-18 11:15:20.520531] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:52.086 [2024-04-18 11:15:20.520538] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520543] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520546] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.520554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.086 [2024-04-18 11:15:20.520573] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.086 [2024-04-18 11:15:20.520665] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.086 [2024-04-18 11:15:20.520676] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.086 [2024-04-18 11:15:20.520680] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520684] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.086 [2024-04-18 11:15:20.520691] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:52.086 [2024-04-18 11:15:20.520702] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520707] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520710] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.520718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.086 [2024-04-18 11:15:20.520736] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.086 [2024-04-18 11:15:20.520833] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.086 [2024-04-18 11:15:20.520854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.086 [2024-04-18 11:15:20.520858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.520862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.086 [2024-04-18 11:15:20.520869] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:52.086 [2024-04-18 11:15:20.520874] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:52.086 [2024-04-18 11:15:20.520883] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:52.086 [2024-04-18 11:15:20.520988] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:52.086 [2024-04-18 11:15:20.520994] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:52.086 [2024-04-18 11:15:20.521004] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.521020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.086 [2024-04-18 11:15:20.521051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.086 [2024-04-18 11:15:20.521153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.086 [2024-04-18 11:15:20.521160] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.086 [2024-04-18 11:15:20.521164] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:52.086 [2024-04-18 11:15:20.521168] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.086 [2024-04-18 11:15:20.521175] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:52.086 [2024-04-18 11:15:20.521185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521190] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521194] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.521202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.086 [2024-04-18 11:15:20.521220] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.086 [2024-04-18 11:15:20.521317] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.086 [2024-04-18 11:15:20.521325] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.086 [2024-04-18 11:15:20.521329] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521333] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.086 [2024-04-18 11:15:20.521340] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:52.086 [2024-04-18 11:15:20.521345] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:52.086 [2024-04-18 11:15:20.521353] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:52.086 [2024-04-18 11:15:20.521364] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:52.086 [2024-04-18 11:15:20.521374] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521378] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.521386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.086 [2024-04-18 11:15:20.521405] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.086 [2024-04-18 11:15:20.521558] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.086 [2024-04-18 11:15:20.521573] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.086 [2024-04-18 11:15:20.521578] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521582] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c62c0): datao=0, datal=4096, cccid=0 00:27:52.086 [2024-04-18 11:15:20.521587] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140f590) on tqpair(0x13c62c0): expected_datao=0, payload_size=4096 00:27:52.086 [2024-04-18 11:15:20.521593] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:27:52.086 [2024-04-18 11:15:20.521602] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521606] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521616] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.086 [2024-04-18 11:15:20.521622] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.086 [2024-04-18 11:15:20.521626] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521630] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.086 [2024-04-18 11:15:20.521640] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:52.086 [2024-04-18 11:15:20.521646] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:52.086 [2024-04-18 11:15:20.521651] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:52.086 [2024-04-18 11:15:20.521661] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:52.086 [2024-04-18 11:15:20.521666] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:52.086 [2024-04-18 11:15:20.521672] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:52.086 [2024-04-18 11:15:20.521681] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:52.086 [2024-04-18 11:15:20.521690] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521694] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.086 [2024-04-18 11:15:20.521698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.086 [2024-04-18 11:15:20.521713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:52.087 [2024-04-18 11:15:20.521734] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.087 [2024-04-18 11:15:20.521849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.087 [2024-04-18 11:15:20.521856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.087 [2024-04-18 11:15:20.521860] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521864] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f590) on tqpair=0x13c62c0 00:27:52.087 [2024-04-18 11:15:20.521874] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.521889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.087 [2024-04-18 11:15:20.521896] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521901] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521905] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.521911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.087 [2024-04-18 11:15:20.521918] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521922] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521926] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.521932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.087 [2024-04-18 11:15:20.521938] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521942] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521946] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.521952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.087 [2024-04-18 11:15:20.521957] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:52.087 [2024-04-18 11:15:20.521971] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:52.087 [2024-04-18 11:15:20.521978] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.521983] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.521990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.087 [2024-04-18 11:15:20.522020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f590, cid 0, qid 0 00:27:52.087 [2024-04-18 11:15:20.522027] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f6f0, cid 1, qid 0 00:27:52.087 [2024-04-18 11:15:20.522045] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f850, cid 2, qid 0 00:27:52.087 [2024-04-18 11:15:20.522051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.087 [2024-04-18 11:15:20.522055] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140fb10, cid 4, qid 0 00:27:52.087 [2024-04-18 11:15:20.522235] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.087 [2024-04-18 11:15:20.522249] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.087 [2024-04-18 11:15:20.522253] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522257] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140fb10) on tqpair=0x13c62c0 00:27:52.087 [2024-04-18 11:15:20.522264] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:52.087 [2024-04-18 11:15:20.522271] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:52.087 [2024-04-18 11:15:20.522283] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522288] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.522295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.087 [2024-04-18 11:15:20.522316] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140fb10, cid 4, qid 0 00:27:52.087 [2024-04-18 11:15:20.522428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.087 [2024-04-18 11:15:20.522435] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.087 [2024-04-18 11:15:20.522439] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522442] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c62c0): datao=0, datal=4096, cccid=4 00:27:52.087 [2024-04-18 11:15:20.522447] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140fb10) on tqpair(0x13c62c0): expected_datao=0, payload_size=4096 00:27:52.087 [2024-04-18 11:15:20.522452] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522460] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522464] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522481] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.087 [2024-04-18 11:15:20.522487] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.087 [2024-04-18 11:15:20.522491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522495] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140fb10) on tqpair=0x13c62c0 00:27:52.087 [2024-04-18 11:15:20.522520] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:52.087 [2024-04-18 11:15:20.522549] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522555] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.522562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.087 [2024-04-18 11:15:20.522570] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522574] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522578] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.522585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.087 [2024-04-18 11:15:20.522612] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x140fb10, cid 4, qid 0 00:27:52.087 [2024-04-18 11:15:20.522620] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140fc70, cid 5, qid 0 00:27:52.087 [2024-04-18 11:15:20.522776] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.087 [2024-04-18 11:15:20.522783] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.087 [2024-04-18 11:15:20.522787] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522791] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c62c0): datao=0, datal=1024, cccid=4 00:27:52.087 [2024-04-18 11:15:20.522796] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140fb10) on tqpair(0x13c62c0): expected_datao=0, payload_size=1024 00:27:52.087 [2024-04-18 11:15:20.522800] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522807] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522811] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522817] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.087 [2024-04-18 11:15:20.522823] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.087 [2024-04-18 11:15:20.522827] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.522831] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140fc70) on tqpair=0x13c62c0 00:27:52.087 [2024-04-18 11:15:20.563111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.087 [2024-04-18 11:15:20.563142] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.087 [2024-04-18 11:15:20.563148] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563153] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140fb10) on tqpair=0x13c62c0 00:27:52.087 [2024-04-18 11:15:20.563174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563179] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.563191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.087 [2024-04-18 11:15:20.563242] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140fb10, cid 4, qid 0 00:27:52.087 [2024-04-18 11:15:20.563399] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.087 [2024-04-18 11:15:20.563406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.087 [2024-04-18 11:15:20.563410] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563414] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c62c0): datao=0, datal=3072, cccid=4 00:27:52.087 [2024-04-18 11:15:20.563419] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140fb10) on tqpair(0x13c62c0): expected_datao=0, payload_size=3072 00:27:52.087 [2024-04-18 11:15:20.563424] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563433] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563437] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563451] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.087 [2024-04-18 11:15:20.563458] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.087 [2024-04-18 11:15:20.563462] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563466] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140fb10) on tqpair=0x13c62c0 00:27:52.087 [2024-04-18 11:15:20.563478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563482] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13c62c0) 00:27:52.087 [2024-04-18 11:15:20.563490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.087 [2024-04-18 11:15:20.563516] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140fb10, cid 4, qid 0 00:27:52.087 [2024-04-18 11:15:20.563642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.087 [2024-04-18 11:15:20.563653] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.087 [2024-04-18 11:15:20.563658] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.087 [2024-04-18 11:15:20.563662] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13c62c0): datao=0, datal=8, cccid=4 00:27:52.087 [2024-04-18 11:15:20.563667] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140fb10) on tqpair(0x13c62c0): expected_datao=0, payload_size=8 00:27:52.088 [2024-04-18 11:15:20.563672] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.088 [2024-04-18 11:15:20.563679] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.088 [2024-04-18 11:15:20.563683] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.088 [2024-04-18 11:15:20.605080] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.088 [2024-04-18 11:15:20.605125] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.088 [2024-04-18 11:15:20.605131] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.088 [2024-04-18 11:15:20.605137] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140fb10) on tqpair=0x13c62c0 00:27:52.088 ===================================================== 00:27:52.088 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:52.088 ===================================================== 00:27:52.088 Controller Capabilities/Features 00:27:52.088 ================================ 00:27:52.088 Vendor ID: 0000 00:27:52.088 Subsystem Vendor ID: 0000 00:27:52.088 Serial Number: .................... 00:27:52.088 Model Number: ........................................ 
00:27:52.088 Firmware Version: 24.05 00:27:52.088 Recommended Arb Burst: 0 00:27:52.088 IEEE OUI Identifier: 00 00 00 00:27:52.088 Multi-path I/O 00:27:52.088 May have multiple subsystem ports: No 00:27:52.088 May have multiple controllers: No 00:27:52.088 Associated with SR-IOV VF: No 00:27:52.088 Max Data Transfer Size: 131072 00:27:52.088 Max Number of Namespaces: 0 00:27:52.088 Max Number of I/O Queues: 1024 00:27:52.088 NVMe Specification Version (VS): 1.3 00:27:52.088 NVMe Specification Version (Identify): 1.3 00:27:52.088 Maximum Queue Entries: 128 00:27:52.088 Contiguous Queues Required: Yes 00:27:52.088 Arbitration Mechanisms Supported 00:27:52.088 Weighted Round Robin: Not Supported 00:27:52.088 Vendor Specific: Not Supported 00:27:52.088 Reset Timeout: 15000 ms 00:27:52.088 Doorbell Stride: 4 bytes 00:27:52.088 NVM Subsystem Reset: Not Supported 00:27:52.088 Command Sets Supported 00:27:52.088 NVM Command Set: Supported 00:27:52.088 Boot Partition: Not Supported 00:27:52.088 Memory Page Size Minimum: 4096 bytes 00:27:52.088 Memory Page Size Maximum: 4096 bytes 00:27:52.088 Persistent Memory Region: Not Supported 00:27:52.088 Optional Asynchronous Events Supported 00:27:52.088 Namespace Attribute Notices: Not Supported 00:27:52.088 Firmware Activation Notices: Not Supported 00:27:52.088 ANA Change Notices: Not Supported 00:27:52.088 PLE Aggregate Log Change Notices: Not Supported 00:27:52.088 LBA Status Info Alert Notices: Not Supported 00:27:52.088 EGE Aggregate Log Change Notices: Not Supported 00:27:52.088 Normal NVM Subsystem Shutdown event: Not Supported 00:27:52.088 Zone Descriptor Change Notices: Not Supported 00:27:52.088 Discovery Log Change Notices: Supported 00:27:52.088 Controller Attributes 00:27:52.088 128-bit Host Identifier: Not Supported 00:27:52.088 Non-Operational Permissive Mode: Not Supported 00:27:52.088 NVM Sets: Not Supported 00:27:52.088 Read Recovery Levels: Not Supported 00:27:52.088 Endurance Groups: Not Supported 00:27:52.088 Predictable Latency Mode: Not Supported 00:27:52.088 Traffic Based Keep ALive: Not Supported 00:27:52.088 Namespace Granularity: Not Supported 00:27:52.088 SQ Associations: Not Supported 00:27:52.088 UUID List: Not Supported 00:27:52.088 Multi-Domain Subsystem: Not Supported 00:27:52.088 Fixed Capacity Management: Not Supported 00:27:52.088 Variable Capacity Management: Not Supported 00:27:52.088 Delete Endurance Group: Not Supported 00:27:52.088 Delete NVM Set: Not Supported 00:27:52.088 Extended LBA Formats Supported: Not Supported 00:27:52.088 Flexible Data Placement Supported: Not Supported 00:27:52.088 00:27:52.088 Controller Memory Buffer Support 00:27:52.088 ================================ 00:27:52.088 Supported: No 00:27:52.088 00:27:52.088 Persistent Memory Region Support 00:27:52.088 ================================ 00:27:52.088 Supported: No 00:27:52.088 00:27:52.088 Admin Command Set Attributes 00:27:52.088 ============================ 00:27:52.088 Security Send/Receive: Not Supported 00:27:52.088 Format NVM: Not Supported 00:27:52.088 Firmware Activate/Download: Not Supported 00:27:52.088 Namespace Management: Not Supported 00:27:52.088 Device Self-Test: Not Supported 00:27:52.088 Directives: Not Supported 00:27:52.088 NVMe-MI: Not Supported 00:27:52.088 Virtualization Management: Not Supported 00:27:52.088 Doorbell Buffer Config: Not Supported 00:27:52.088 Get LBA Status Capability: Not Supported 00:27:52.088 Command & Feature Lockdown Capability: Not Supported 00:27:52.088 Abort Command Limit: 1 00:27:52.088 Async 
Event Request Limit: 4 00:27:52.088 Number of Firmware Slots: N/A 00:27:52.088 Firmware Slot 1 Read-Only: N/A 00:27:52.088 Firmware Activation Without Reset: N/A 00:27:52.088 Multiple Update Detection Support: N/A 00:27:52.088 Firmware Update Granularity: No Information Provided 00:27:52.088 Per-Namespace SMART Log: No 00:27:52.088 Asymmetric Namespace Access Log Page: Not Supported 00:27:52.088 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:52.088 Command Effects Log Page: Not Supported 00:27:52.088 Get Log Page Extended Data: Supported 00:27:52.088 Telemetry Log Pages: Not Supported 00:27:52.088 Persistent Event Log Pages: Not Supported 00:27:52.088 Supported Log Pages Log Page: May Support 00:27:52.088 Commands Supported & Effects Log Page: Not Supported 00:27:52.088 Feature Identifiers & Effects Log Page:May Support 00:27:52.088 NVMe-MI Commands & Effects Log Page: May Support 00:27:52.088 Data Area 4 for Telemetry Log: Not Supported 00:27:52.088 Error Log Page Entries Supported: 128 00:27:52.088 Keep Alive: Not Supported 00:27:52.088 00:27:52.088 NVM Command Set Attributes 00:27:52.088 ========================== 00:27:52.088 Submission Queue Entry Size 00:27:52.088 Max: 1 00:27:52.088 Min: 1 00:27:52.088 Completion Queue Entry Size 00:27:52.088 Max: 1 00:27:52.088 Min: 1 00:27:52.088 Number of Namespaces: 0 00:27:52.088 Compare Command: Not Supported 00:27:52.088 Write Uncorrectable Command: Not Supported 00:27:52.088 Dataset Management Command: Not Supported 00:27:52.088 Write Zeroes Command: Not Supported 00:27:52.088 Set Features Save Field: Not Supported 00:27:52.088 Reservations: Not Supported 00:27:52.088 Timestamp: Not Supported 00:27:52.088 Copy: Not Supported 00:27:52.088 Volatile Write Cache: Not Present 00:27:52.088 Atomic Write Unit (Normal): 1 00:27:52.088 Atomic Write Unit (PFail): 1 00:27:52.088 Atomic Compare & Write Unit: 1 00:27:52.088 Fused Compare & Write: Supported 00:27:52.088 Scatter-Gather List 00:27:52.088 SGL Command Set: Supported 00:27:52.088 SGL Keyed: Supported 00:27:52.088 SGL Bit Bucket Descriptor: Not Supported 00:27:52.088 SGL Metadata Pointer: Not Supported 00:27:52.088 Oversized SGL: Not Supported 00:27:52.088 SGL Metadata Address: Not Supported 00:27:52.088 SGL Offset: Supported 00:27:52.088 Transport SGL Data Block: Not Supported 00:27:52.088 Replay Protected Memory Block: Not Supported 00:27:52.088 00:27:52.088 Firmware Slot Information 00:27:52.088 ========================= 00:27:52.088 Active slot: 0 00:27:52.088 00:27:52.088 00:27:52.088 Error Log 00:27:52.088 ========= 00:27:52.088 00:27:52.088 Active Namespaces 00:27:52.088 ================= 00:27:52.088 Discovery Log Page 00:27:52.088 ================== 00:27:52.088 Generation Counter: 2 00:27:52.088 Number of Records: 2 00:27:52.088 Record Format: 0 00:27:52.088 00:27:52.088 Discovery Log Entry 0 00:27:52.088 ---------------------- 00:27:52.088 Transport Type: 3 (TCP) 00:27:52.088 Address Family: 1 (IPv4) 00:27:52.088 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:52.088 Entry Flags: 00:27:52.088 Duplicate Returned Information: 1 00:27:52.088 Explicit Persistent Connection Support for Discovery: 1 00:27:52.088 Transport Requirements: 00:27:52.088 Secure Channel: Not Required 00:27:52.088 Port ID: 0 (0x0000) 00:27:52.088 Controller ID: 65535 (0xffff) 00:27:52.088 Admin Max SQ Size: 128 00:27:52.088 Transport Service Identifier: 4420 00:27:52.088 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:52.088 Transport Address: 10.0.0.2 00:27:52.088 
Discovery Log Entry 1 00:27:52.088 ---------------------- 00:27:52.088 Transport Type: 3 (TCP) 00:27:52.088 Address Family: 1 (IPv4) 00:27:52.088 Subsystem Type: 2 (NVM Subsystem) 00:27:52.088 Entry Flags: 00:27:52.088 Duplicate Returned Information: 0 00:27:52.088 Explicit Persistent Connection Support for Discovery: 0 00:27:52.088 Transport Requirements: 00:27:52.088 Secure Channel: Not Required 00:27:52.088 Port ID: 0 (0x0000) 00:27:52.088 Controller ID: 65535 (0xffff) 00:27:52.088 Admin Max SQ Size: 128 00:27:52.088 Transport Service Identifier: 4420 00:27:52.089 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:52.089 Transport Address: 10.0.0.2 [2024-04-18 11:15:20.605276] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:52.089 [2024-04-18 11:15:20.605295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.089 [2024-04-18 11:15:20.605304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.089 [2024-04-18 11:15:20.605310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.089 [2024-04-18 11:15:20.605316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.089 [2024-04-18 11:15:20.605330] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605335] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605339] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.605351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.605386] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.605509] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.605516] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.605520] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605524] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.605540] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605546] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605549] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.605558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.605583] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.605705] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.605722] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.605727] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605731] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.605738] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:52.089 [2024-04-18 11:15:20.605743] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:52.089 [2024-04-18 11:15:20.605754] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605759] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.605770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.605790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.605883] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.605890] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.605894] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605898] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.605910] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605914] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.605918] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.605926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.605943] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.606051] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.606060] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.606064] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606068] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.606080] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606085] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606089] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.606096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.606123] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.606218] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 
11:15:20.606225] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.606229] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606233] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.606244] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606248] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606252] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.606260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.606277] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.606366] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.606373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.606376] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606381] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.606391] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606396] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.606407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.606424] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.606520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.606531] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.606535] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606539] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.606551] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606555] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606559] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.606567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.606585] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.606676] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.606686] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.606691] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:52.089 [2024-04-18 11:15:20.606695] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.606706] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.606722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.606750] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.606845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.606856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.606860] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606864] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.606876] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.606884] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.606892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.606910] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.607001] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.607012] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.607016] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.607020] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.089 [2024-04-18 11:15:20.607041] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.607048] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.089 [2024-04-18 11:15:20.607052] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.089 [2024-04-18 11:15:20.607059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.089 [2024-04-18 11:15:20.607080] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.089 [2024-04-18 11:15:20.607170] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.089 [2024-04-18 11:15:20.607185] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.089 [2024-04-18 11:15:20.607189] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607194] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.607214] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607220] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607224] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.607231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.607251] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.607356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.607363] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.607367] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607371] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.607382] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607387] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607390] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.607398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.607416] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.607507] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.607513] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.607517] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607521] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.607532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607537] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607540] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.607548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.607565] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.607652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.607663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.607667] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.607683] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607687] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 
11:15:20.607691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.607699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.607717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.607802] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.607808] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.607812] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607816] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.607827] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607832] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607836] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.607843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.607868] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.607957] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.607964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.607967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607971] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.607982] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607987] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.607991] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.607998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.608015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.608120] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.608134] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.608139] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608143] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.608155] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608160] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608164] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.608171] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.608192] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.608286] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.608294] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.608298] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608302] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.608313] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608322] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.608329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.090 [2024-04-18 11:15:20.608347] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.090 [2024-04-18 11:15:20.608437] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.090 [2024-04-18 11:15:20.608445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.090 [2024-04-18 11:15:20.608448] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608452] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.090 [2024-04-18 11:15:20.608463] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608468] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.090 [2024-04-18 11:15:20.608472] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.090 [2024-04-18 11:15:20.608479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.091 [2024-04-18 11:15:20.608496] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.091 [2024-04-18 11:15:20.608596] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.091 [2024-04-18 11:15:20.608603] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.091 [2024-04-18 11:15:20.608607] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608611] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.091 [2024-04-18 11:15:20.608622] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608630] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.091 [2024-04-18 11:15:20.608638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.091 [2024-04-18 11:15:20.608655] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.091 [2024-04-18 11:15:20.608742] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.091 [2024-04-18 11:15:20.608756] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.091 [2024-04-18 11:15:20.608760] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608765] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.091 [2024-04-18 11:15:20.608776] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608781] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608785] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.091 [2024-04-18 11:15:20.608792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.091 [2024-04-18 11:15:20.608811] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.091 [2024-04-18 11:15:20.608904] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.091 [2024-04-18 11:15:20.608914] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.091 [2024-04-18 11:15:20.608918] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608922] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.091 [2024-04-18 11:15:20.608934] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608938] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.608942] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.091 [2024-04-18 11:15:20.608950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.091 [2024-04-18 11:15:20.608968] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.091 [2024-04-18 11:15:20.613052] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.091 [2024-04-18 11:15:20.613073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.091 [2024-04-18 11:15:20.613078] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.613082] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.091 [2024-04-18 11:15:20.613098] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.613103] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.613107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13c62c0) 00:27:52.091 [2024-04-18 11:15:20.613116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.091 [2024-04-18 11:15:20.613142] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140f9b0, cid 3, qid 0 00:27:52.091 [2024-04-18 11:15:20.613241] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:52.091 [2024-04-18 11:15:20.613248] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.091 [2024-04-18 11:15:20.613252] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.091 [2024-04-18 11:15:20.613256] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140f9b0) on tqpair=0x13c62c0 00:27:52.091 [2024-04-18 11:15:20.613265] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:27:52.091 00:27:52.091 11:15:20 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:52.091 [2024-04-18 11:15:20.654368] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:52.091 [2024-04-18 11:15:20.654440] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98969 ] 00:27:52.353 [2024-04-18 11:15:20.799307] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:52.353 [2024-04-18 11:15:20.799381] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:52.353 [2024-04-18 11:15:20.799389] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:52.353 [2024-04-18 11:15:20.799402] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:52.353 [2024-04-18 11:15:20.799413] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:52.353 [2024-04-18 11:15:20.799561] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:52.354 [2024-04-18 11:15:20.799611] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11f42c0 0 00:27:52.354 [2024-04-18 11:15:20.812075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:52.354 [2024-04-18 11:15:20.812125] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:52.354 [2024-04-18 11:15:20.812141] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:52.354 [2024-04-18 11:15:20.812146] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:52.354 [2024-04-18 11:15:20.812211] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.812219] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.812224] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.812241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:52.354 [2024-04-18 11:15:20.812278] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.820068] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.820098] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.820104] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:27:52.354 [2024-04-18 11:15:20.820110] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.354 [2024-04-18 11:15:20.820128] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:52.354 [2024-04-18 11:15:20.820147] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:52.354 [2024-04-18 11:15:20.820154] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:52.354 [2024-04-18 11:15:20.820177] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820183] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820187] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.820201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.354 [2024-04-18 11:15:20.820233] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.820324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.820332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.820336] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820341] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.354 [2024-04-18 11:15:20.820352] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:52.354 [2024-04-18 11:15:20.820361] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:52.354 [2024-04-18 11:15:20.820369] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820374] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820378] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.820386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.354 [2024-04-18 11:15:20.820406] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.820470] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.820477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.820481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820485] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.354 [2024-04-18 11:15:20.820493] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:52.354 [2024-04-18 11:15:20.820502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:52.354 [2024-04-18 11:15:20.820510] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820515] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820519] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.820527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.354 [2024-04-18 11:15:20.820545] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.820600] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.820607] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.820611] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820616] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.354 [2024-04-18 11:15:20.820623] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:52.354 [2024-04-18 11:15:20.820634] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820639] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820643] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.820651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.354 [2024-04-18 11:15:20.820669] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.820731] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.820738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.820742] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820747] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.354 [2024-04-18 11:15:20.820753] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:52.354 [2024-04-18 11:15:20.820759] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:52.354 [2024-04-18 11:15:20.820768] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:52.354 [2024-04-18 11:15:20.820874] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:52.354 [2024-04-18 11:15:20.820887] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:52.354 [2024-04-18 11:15:20.820899] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820904] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.820908] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.820916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.354 [2024-04-18 11:15:20.820936] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.820992] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.821011] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.821016] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821020] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.354 [2024-04-18 11:15:20.821027] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:52.354 [2024-04-18 11:15:20.821051] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821058] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821062] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.821070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.354 [2024-04-18 11:15:20.821090] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.821150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.821157] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.821161] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821165] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.354 [2024-04-18 11:15:20.821171] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:52.354 [2024-04-18 11:15:20.821177] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:52.354 [2024-04-18 11:15:20.821185] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:52.354 [2024-04-18 11:15:20.821196] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:52.354 [2024-04-18 11:15:20.821208] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821212] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.354 [2024-04-18 11:15:20.821221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.354 [2024-04-18 11:15:20.821240] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.354 [2024-04-18 11:15:20.821353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 7 00:27:52.354 [2024-04-18 11:15:20.821360] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.354 [2024-04-18 11:15:20.821365] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821369] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=4096, cccid=0 00:27:52.354 [2024-04-18 11:15:20.821374] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123d590) on tqpair(0x11f42c0): expected_datao=0, payload_size=4096 00:27:52.354 [2024-04-18 11:15:20.821380] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821390] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821395] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.354 [2024-04-18 11:15:20.821410] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.354 [2024-04-18 11:15:20.821414] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.354 [2024-04-18 11:15:20.821419] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.355 [2024-04-18 11:15:20.821429] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:52.355 [2024-04-18 11:15:20.821435] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:52.355 [2024-04-18 11:15:20.821440] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:52.355 [2024-04-18 11:15:20.821450] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:52.355 [2024-04-18 11:15:20.821455] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:52.355 [2024-04-18 11:15:20.821461] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.821471] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.821480] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821485] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821489] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.821497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:52.355 [2024-04-18 11:15:20.821518] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.355 [2024-04-18 11:15:20.821582] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.355 [2024-04-18 11:15:20.821589] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.355 [2024-04-18 11:15:20.821593] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821598] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x123d590) on tqpair=0x11f42c0 00:27:52.355 [2024-04-18 11:15:20.821607] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821611] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.821623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.355 [2024-04-18 11:15:20.821631] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821639] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.821645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.355 [2024-04-18 11:15:20.821652] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821656] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821660] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.821667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.355 [2024-04-18 11:15:20.821674] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821678] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.821688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.355 [2024-04-18 11:15:20.821694] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.821714] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.821722] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821726] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.821733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.355 [2024-04-18 11:15:20.821754] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d590, cid 0, qid 0 00:27:52.355 [2024-04-18 11:15:20.821762] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d6f0, cid 1, qid 0 00:27:52.355 [2024-04-18 11:15:20.821767] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d850, cid 2, qid 0 00:27:52.355 [2024-04-18 11:15:20.821772] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.355 [2024-04-18 11:15:20.821777] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123db10, cid 4, qid 0 00:27:52.355 [2024-04-18 11:15:20.821876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.355 [2024-04-18 11:15:20.821883] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.355 [2024-04-18 11:15:20.821887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821891] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123db10) on tqpair=0x11f42c0 00:27:52.355 [2024-04-18 11:15:20.821898] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:52.355 [2024-04-18 11:15:20.821904] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.821914] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.821921] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.821928] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821933] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.821937] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.821944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:52.355 [2024-04-18 11:15:20.821963] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123db10, cid 4, qid 0 00:27:52.355 [2024-04-18 11:15:20.822022] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.355 [2024-04-18 11:15:20.822029] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.355 [2024-04-18 11:15:20.822045] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123db10) on tqpair=0x11f42c0 00:27:52.355 [2024-04-18 11:15:20.822104] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.822116] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.822125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822129] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.822137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.355 [2024-04-18 11:15:20.822158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123db10, cid 4, qid 0 00:27:52.355 [2024-04-18 11:15:20.822232] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.355 [2024-04-18 11:15:20.822244] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:27:52.355 [2024-04-18 11:15:20.822252] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822256] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=4096, cccid=4 00:27:52.355 [2024-04-18 11:15:20.822261] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123db10) on tqpair(0x11f42c0): expected_datao=0, payload_size=4096 00:27:52.355 [2024-04-18 11:15:20.822266] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822274] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822279] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822288] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.355 [2024-04-18 11:15:20.822294] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.355 [2024-04-18 11:15:20.822298] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822302] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123db10) on tqpair=0x11f42c0 00:27:52.355 [2024-04-18 11:15:20.822314] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:52.355 [2024-04-18 11:15:20.822328] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.822339] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.822347] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f42c0) 00:27:52.355 [2024-04-18 11:15:20.822360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.355 [2024-04-18 11:15:20.822381] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123db10, cid 4, qid 0 00:27:52.355 [2024-04-18 11:15:20.822468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.355 [2024-04-18 11:15:20.822475] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.355 [2024-04-18 11:15:20.822479] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822483] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=4096, cccid=4 00:27:52.355 [2024-04-18 11:15:20.822488] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123db10) on tqpair(0x11f42c0): expected_datao=0, payload_size=4096 00:27:52.355 [2024-04-18 11:15:20.822493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822500] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822505] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822514] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.355 [2024-04-18 11:15:20.822520] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.355 [2024-04-18 11:15:20.822524] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.355 [2024-04-18 11:15:20.822528] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123db10) on tqpair=0x11f42c0 00:27:52.355 [2024-04-18 11:15:20.822545] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.822556] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:52.355 [2024-04-18 11:15:20.822565] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822569] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.822577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.822597] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123db10, cid 4, qid 0 00:27:52.356 [2024-04-18 11:15:20.822662] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.356 [2024-04-18 11:15:20.822674] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.356 [2024-04-18 11:15:20.822679] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822683] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=4096, cccid=4 00:27:52.356 [2024-04-18 11:15:20.822688] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123db10) on tqpair(0x11f42c0): expected_datao=0, payload_size=4096 00:27:52.356 [2024-04-18 11:15:20.822693] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822700] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822705] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822714] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.356 [2024-04-18 11:15:20.822720] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.356 [2024-04-18 11:15:20.822724] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822728] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123db10) on tqpair=0x11f42c0 00:27:52.356 [2024-04-18 11:15:20.822738] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:52.356 [2024-04-18 11:15:20.822748] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:52.356 [2024-04-18 11:15:20.822759] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:52.356 [2024-04-18 11:15:20.822766] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:52.356 [2024-04-18 11:15:20.822772] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:52.356 [2024-04-18 11:15:20.822778] 
nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:52.356 [2024-04-18 11:15:20.822783] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:52.356 [2024-04-18 11:15:20.822789] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:52.356 [2024-04-18 11:15:20.822830] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822842] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.822851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.822859] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822864] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.822868] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.822875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.356 [2024-04-18 11:15:20.822909] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123db10, cid 4, qid 0 00:27:52.356 [2024-04-18 11:15:20.822917] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123dc70, cid 5, qid 0 00:27:52.356 [2024-04-18 11:15:20.823015] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.356 [2024-04-18 11:15:20.823028] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.356 [2024-04-18 11:15:20.823048] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823053] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123db10) on tqpair=0x11f42c0 00:27:52.356 [2024-04-18 11:15:20.823062] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.356 [2024-04-18 11:15:20.823068] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.356 [2024-04-18 11:15:20.823072] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823076] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123dc70) on tqpair=0x11f42c0 00:27:52.356 [2024-04-18 11:15:20.823089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823094] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.823102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.823130] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123dc70, cid 5, qid 0 00:27:52.356 [2024-04-18 11:15:20.823199] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.356 [2024-04-18 11:15:20.823225] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.356 [2024-04-18 11:15:20.823230] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823234] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123dc70) on tqpair=0x11f42c0 00:27:52.356 [2024-04-18 11:15:20.823247] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823252] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.823260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.823280] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123dc70, cid 5, qid 0 00:27:52.356 [2024-04-18 11:15:20.823337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.356 [2024-04-18 11:15:20.823344] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.356 [2024-04-18 11:15:20.823348] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823352] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123dc70) on tqpair=0x11f42c0 00:27:52.356 [2024-04-18 11:15:20.823364] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823368] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.823376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.823393] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123dc70, cid 5, qid 0 00:27:52.356 [2024-04-18 11:15:20.823446] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.356 [2024-04-18 11:15:20.823453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.356 [2024-04-18 11:15:20.823457] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823461] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123dc70) on tqpair=0x11f42c0 00:27:52.356 [2024-04-18 11:15:20.823476] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823481] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.823489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.823497] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823501] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.823508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.823516] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823521] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.823527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.356 [2024-04-18 11:15:20.823547] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823551] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11f42c0) 00:27:52.356 [2024-04-18 11:15:20.823558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.356 [2024-04-18 11:15:20.823577] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123dc70, cid 5, qid 0 00:27:52.356 [2024-04-18 11:15:20.823584] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123db10, cid 4, qid 0 00:27:52.356 [2024-04-18 11:15:20.823589] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123ddd0, cid 6, qid 0 00:27:52.356 [2024-04-18 11:15:20.823594] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123df30, cid 7, qid 0 00:27:52.356 [2024-04-18 11:15:20.823756] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.356 [2024-04-18 11:15:20.823771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.356 [2024-04-18 11:15:20.823776] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823780] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=8192, cccid=5 00:27:52.356 [2024-04-18 11:15:20.823786] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123dc70) on tqpair(0x11f42c0): expected_datao=0, payload_size=8192 00:27:52.356 [2024-04-18 11:15:20.823791] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823810] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823815] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.356 [2024-04-18 11:15:20.823828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.356 [2024-04-18 11:15:20.823831] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823835] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=512, cccid=4 00:27:52.356 [2024-04-18 11:15:20.823840] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123db10) on tqpair(0x11f42c0): expected_datao=0, payload_size=512 00:27:52.356 [2024-04-18 11:15:20.823845] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823852] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823856] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823862] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.356 [2024-04-18 11:15:20.823868] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.356 [2024-04-18 11:15:20.823871] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.356 [2024-04-18 11:15:20.823875] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=512, cccid=6 00:27:52.356 [2024-04-18 11:15:20.823880] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123ddd0) on tqpair(0x11f42c0): 
expected_datao=0, payload_size=512 00:27:52.356 [2024-04-18 11:15:20.823885] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823892] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823896] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823902] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:52.357 [2024-04-18 11:15:20.823908] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:52.357 [2024-04-18 11:15:20.823912] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823916] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11f42c0): datao=0, datal=4096, cccid=7 00:27:52.357 [2024-04-18 11:15:20.823921] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123df30) on tqpair(0x11f42c0): expected_datao=0, payload_size=4096 00:27:52.357 [2024-04-18 11:15:20.823925] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823933] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823937] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.357 [2024-04-18 11:15:20.823949] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.357 [2024-04-18 11:15:20.823952] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823957] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123dc70) on tqpair=0x11f42c0 00:27:52.357 [2024-04-18 11:15:20.823979] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.357 [2024-04-18 11:15:20.823986] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.357 [2024-04-18 11:15:20.823990] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.823994] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123db10) on tqpair=0x11f42c0 00:27:52.357 [2024-04-18 11:15:20.824006] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.357 [2024-04-18 11:15:20.824012] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.357 [2024-04-18 11:15:20.824016] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.824021] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123ddd0) on tqpair=0x11f42c0 00:27:52.357 [2024-04-18 11:15:20.824029] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.357 [2024-04-18 11:15:20.828062] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.357 [2024-04-18 11:15:20.828067] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.357 [2024-04-18 11:15:20.828072] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123df30) on tqpair=0x11f42c0 00:27:52.357 ===================================================== 00:27:52.357 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:52.357 ===================================================== 00:27:52.357 Controller Capabilities/Features 00:27:52.357 ================================ 00:27:52.357 Vendor ID: 8086 00:27:52.357 Subsystem Vendor ID: 
8086 00:27:52.357 Serial Number: SPDK00000000000001 00:27:52.357 Model Number: SPDK bdev Controller 00:27:52.357 Firmware Version: 24.05 00:27:52.357 Recommended Arb Burst: 6 00:27:52.357 IEEE OUI Identifier: e4 d2 5c 00:27:52.357 Multi-path I/O 00:27:52.357 May have multiple subsystem ports: Yes 00:27:52.357 May have multiple controllers: Yes 00:27:52.357 Associated with SR-IOV VF: No 00:27:52.357 Max Data Transfer Size: 131072 00:27:52.357 Max Number of Namespaces: 32 00:27:52.357 Max Number of I/O Queues: 127 00:27:52.357 NVMe Specification Version (VS): 1.3 00:27:52.357 NVMe Specification Version (Identify): 1.3 00:27:52.357 Maximum Queue Entries: 128 00:27:52.357 Contiguous Queues Required: Yes 00:27:52.357 Arbitration Mechanisms Supported 00:27:52.357 Weighted Round Robin: Not Supported 00:27:52.357 Vendor Specific: Not Supported 00:27:52.357 Reset Timeout: 15000 ms 00:27:52.357 Doorbell Stride: 4 bytes 00:27:52.357 NVM Subsystem Reset: Not Supported 00:27:52.357 Command Sets Supported 00:27:52.357 NVM Command Set: Supported 00:27:52.357 Boot Partition: Not Supported 00:27:52.357 Memory Page Size Minimum: 4096 bytes 00:27:52.357 Memory Page Size Maximum: 4096 bytes 00:27:52.357 Persistent Memory Region: Not Supported 00:27:52.357 Optional Asynchronous Events Supported 00:27:52.357 Namespace Attribute Notices: Supported 00:27:52.357 Firmware Activation Notices: Not Supported 00:27:52.357 ANA Change Notices: Not Supported 00:27:52.357 PLE Aggregate Log Change Notices: Not Supported 00:27:52.357 LBA Status Info Alert Notices: Not Supported 00:27:52.357 EGE Aggregate Log Change Notices: Not Supported 00:27:52.357 Normal NVM Subsystem Shutdown event: Not Supported 00:27:52.357 Zone Descriptor Change Notices: Not Supported 00:27:52.357 Discovery Log Change Notices: Not Supported 00:27:52.357 Controller Attributes 00:27:52.357 128-bit Host Identifier: Supported 00:27:52.357 Non-Operational Permissive Mode: Not Supported 00:27:52.357 NVM Sets: Not Supported 00:27:52.357 Read Recovery Levels: Not Supported 00:27:52.357 Endurance Groups: Not Supported 00:27:52.357 Predictable Latency Mode: Not Supported 00:27:52.357 Traffic Based Keep ALive: Not Supported 00:27:52.357 Namespace Granularity: Not Supported 00:27:52.357 SQ Associations: Not Supported 00:27:52.357 UUID List: Not Supported 00:27:52.357 Multi-Domain Subsystem: Not Supported 00:27:52.357 Fixed Capacity Management: Not Supported 00:27:52.357 Variable Capacity Management: Not Supported 00:27:52.357 Delete Endurance Group: Not Supported 00:27:52.357 Delete NVM Set: Not Supported 00:27:52.357 Extended LBA Formats Supported: Not Supported 00:27:52.357 Flexible Data Placement Supported: Not Supported 00:27:52.357 00:27:52.357 Controller Memory Buffer Support 00:27:52.357 ================================ 00:27:52.357 Supported: No 00:27:52.357 00:27:52.357 Persistent Memory Region Support 00:27:52.357 ================================ 00:27:52.357 Supported: No 00:27:52.357 00:27:52.357 Admin Command Set Attributes 00:27:52.357 ============================ 00:27:52.357 Security Send/Receive: Not Supported 00:27:52.357 Format NVM: Not Supported 00:27:52.357 Firmware Activate/Download: Not Supported 00:27:52.357 Namespace Management: Not Supported 00:27:52.357 Device Self-Test: Not Supported 00:27:52.357 Directives: Not Supported 00:27:52.357 NVMe-MI: Not Supported 00:27:52.357 Virtualization Management: Not Supported 00:27:52.357 Doorbell Buffer Config: Not Supported 00:27:52.357 Get LBA Status Capability: Not Supported 00:27:52.357 Command & 
Feature Lockdown Capability: Not Supported 00:27:52.357 Abort Command Limit: 4 00:27:52.357 Async Event Request Limit: 4 00:27:52.357 Number of Firmware Slots: N/A 00:27:52.357 Firmware Slot 1 Read-Only: N/A 00:27:52.357 Firmware Activation Without Reset: N/A 00:27:52.357 Multiple Update Detection Support: N/A 00:27:52.357 Firmware Update Granularity: No Information Provided 00:27:52.357 Per-Namespace SMART Log: No 00:27:52.357 Asymmetric Namespace Access Log Page: Not Supported 00:27:52.357 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:52.357 Command Effects Log Page: Supported 00:27:52.357 Get Log Page Extended Data: Supported 00:27:52.357 Telemetry Log Pages: Not Supported 00:27:52.357 Persistent Event Log Pages: Not Supported 00:27:52.357 Supported Log Pages Log Page: May Support 00:27:52.357 Commands Supported & Effects Log Page: Not Supported 00:27:52.357 Feature Identifiers & Effects Log Page:May Support 00:27:52.357 NVMe-MI Commands & Effects Log Page: May Support 00:27:52.357 Data Area 4 for Telemetry Log: Not Supported 00:27:52.357 Error Log Page Entries Supported: 128 00:27:52.357 Keep Alive: Supported 00:27:52.357 Keep Alive Granularity: 10000 ms 00:27:52.357 00:27:52.357 NVM Command Set Attributes 00:27:52.357 ========================== 00:27:52.357 Submission Queue Entry Size 00:27:52.357 Max: 64 00:27:52.357 Min: 64 00:27:52.357 Completion Queue Entry Size 00:27:52.357 Max: 16 00:27:52.357 Min: 16 00:27:52.357 Number of Namespaces: 32 00:27:52.357 Compare Command: Supported 00:27:52.357 Write Uncorrectable Command: Not Supported 00:27:52.357 Dataset Management Command: Supported 00:27:52.357 Write Zeroes Command: Supported 00:27:52.357 Set Features Save Field: Not Supported 00:27:52.357 Reservations: Supported 00:27:52.357 Timestamp: Not Supported 00:27:52.357 Copy: Supported 00:27:52.357 Volatile Write Cache: Present 00:27:52.357 Atomic Write Unit (Normal): 1 00:27:52.357 Atomic Write Unit (PFail): 1 00:27:52.357 Atomic Compare & Write Unit: 1 00:27:52.357 Fused Compare & Write: Supported 00:27:52.357 Scatter-Gather List 00:27:52.357 SGL Command Set: Supported 00:27:52.357 SGL Keyed: Supported 00:27:52.357 SGL Bit Bucket Descriptor: Not Supported 00:27:52.357 SGL Metadata Pointer: Not Supported 00:27:52.357 Oversized SGL: Not Supported 00:27:52.357 SGL Metadata Address: Not Supported 00:27:52.357 SGL Offset: Supported 00:27:52.357 Transport SGL Data Block: Not Supported 00:27:52.357 Replay Protected Memory Block: Not Supported 00:27:52.357 00:27:52.357 Firmware Slot Information 00:27:52.357 ========================= 00:27:52.357 Active slot: 1 00:27:52.357 Slot 1 Firmware Revision: 24.05 00:27:52.357 00:27:52.357 00:27:52.357 Commands Supported and Effects 00:27:52.357 ============================== 00:27:52.357 Admin Commands 00:27:52.357 -------------- 00:27:52.357 Get Log Page (02h): Supported 00:27:52.357 Identify (06h): Supported 00:27:52.357 Abort (08h): Supported 00:27:52.357 Set Features (09h): Supported 00:27:52.357 Get Features (0Ah): Supported 00:27:52.357 Asynchronous Event Request (0Ch): Supported 00:27:52.357 Keep Alive (18h): Supported 00:27:52.358 I/O Commands 00:27:52.358 ------------ 00:27:52.358 Flush (00h): Supported LBA-Change 00:27:52.358 Write (01h): Supported LBA-Change 00:27:52.358 Read (02h): Supported 00:27:52.358 Compare (05h): Supported 00:27:52.358 Write Zeroes (08h): Supported LBA-Change 00:27:52.358 Dataset Management (09h): Supported LBA-Change 00:27:52.358 Copy (19h): Supported LBA-Change 00:27:52.358 Unknown (79h): Supported LBA-Change 
00:27:52.358 Unknown (7Ah): Supported 00:27:52.358 00:27:52.358 Error Log 00:27:52.358 ========= 00:27:52.358 00:27:52.358 Arbitration 00:27:52.358 =========== 00:27:52.358 Arbitration Burst: 1 00:27:52.358 00:27:52.358 Power Management 00:27:52.358 ================ 00:27:52.358 Number of Power States: 1 00:27:52.358 Current Power State: Power State #0 00:27:52.358 Power State #0: 00:27:52.358 Max Power: 0.00 W 00:27:52.358 Non-Operational State: Operational 00:27:52.358 Entry Latency: Not Reported 00:27:52.358 Exit Latency: Not Reported 00:27:52.358 Relative Read Throughput: 0 00:27:52.358 Relative Read Latency: 0 00:27:52.358 Relative Write Throughput: 0 00:27:52.358 Relative Write Latency: 0 00:27:52.358 Idle Power: Not Reported 00:27:52.358 Active Power: Not Reported 00:27:52.358 Non-Operational Permissive Mode: Not Supported 00:27:52.358 00:27:52.358 Health Information 00:27:52.358 ================== 00:27:52.358 Critical Warnings: 00:27:52.358 Available Spare Space: OK 00:27:52.358 Temperature: OK 00:27:52.358 Device Reliability: OK 00:27:52.358 Read Only: No 00:27:52.358 Volatile Memory Backup: OK 00:27:52.358 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:52.358 Temperature Threshold: [2024-04-18 11:15:20.828201] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828209] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11f42c0) 00:27:52.358 [2024-04-18 11:15:20.828218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.358 [2024-04-18 11:15:20.828247] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123df30, cid 7, qid 0 00:27:52.358 [2024-04-18 11:15:20.828324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.358 [2024-04-18 11:15:20.828332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.358 [2024-04-18 11:15:20.828336] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828340] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123df30) on tqpair=0x11f42c0 00:27:52.358 [2024-04-18 11:15:20.828379] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:52.358 [2024-04-18 11:15:20.828393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.358 [2024-04-18 11:15:20.828401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.358 [2024-04-18 11:15:20.828408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.358 [2024-04-18 11:15:20.828415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.358 [2024-04-18 11:15:20.828424] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828429] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828433] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.358 [2024-04-18 11:15:20.828441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.358 [2024-04-18 11:15:20.828464] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.358 [2024-04-18 11:15:20.828521] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.358 [2024-04-18 11:15:20.828528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.358 [2024-04-18 11:15:20.828532] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828536] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.358 [2024-04-18 11:15:20.828546] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828551] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828555] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.358 [2024-04-18 11:15:20.828562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.358 [2024-04-18 11:15:20.828584] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.358 [2024-04-18 11:15:20.828660] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.358 [2024-04-18 11:15:20.828667] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.358 [2024-04-18 11:15:20.828671] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828675] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.358 [2024-04-18 11:15:20.828682] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:52.358 [2024-04-18 11:15:20.828687] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:52.358 [2024-04-18 11:15:20.828697] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828706] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.358 [2024-04-18 11:15:20.828714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.358 [2024-04-18 11:15:20.828732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.358 [2024-04-18 11:15:20.828784] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.358 [2024-04-18 11:15:20.828797] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.358 [2024-04-18 11:15:20.828801] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828806] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.358 [2024-04-18 11:15:20.828818] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828823] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828828] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.358 [2024-04-18 
11:15:20.828835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.358 [2024-04-18 11:15:20.828853] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.358 [2024-04-18 11:15:20.828906] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.358 [2024-04-18 11:15:20.828913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.358 [2024-04-18 11:15:20.828917] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828921] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.358 [2024-04-18 11:15:20.828933] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828938] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.828942] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.358 [2024-04-18 11:15:20.828950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.358 [2024-04-18 11:15:20.828967] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.358 [2024-04-18 11:15:20.829018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.358 [2024-04-18 11:15:20.829025] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.358 [2024-04-18 11:15:20.829029] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.829048] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.358 [2024-04-18 11:15:20.829061] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.829067] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.358 [2024-04-18 11:15:20.829071] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829099] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.829158] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.829165] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.829169] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829173] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.829185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829190] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829194] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829219] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.829275] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.829290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.829295] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829299] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.829312] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829317] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829322] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829349] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.829400] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.829414] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.829418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829423] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.829435] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829440] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829444] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829471] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.829524] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.829531] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.829535] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829540] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.829551] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829556] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829560] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829585] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.829640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:52.359 [2024-04-18 11:15:20.829651] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.829656] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829660] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.829672] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829677] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829681] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829707] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.829765] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.829776] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.829780] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829785] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.829797] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829802] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829806] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829832] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.829886] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.829893] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.829897] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829901] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.829913] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829917] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.829921] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.829929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.829948] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.830000] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.830007] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.830011] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830016] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.830027] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830043] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830048] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.830056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.830075] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.830134] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.830141] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.830145] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830149] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.830161] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830166] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830170] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.830178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.830195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.830248] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.830255] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.830259] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830264] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.830275] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830280] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830284] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.830292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.830309] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.830364] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.830375] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.830380] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830384] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on 
tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.830396] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830401] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830406] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.359 [2024-04-18 11:15:20.830413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.359 [2024-04-18 11:15:20.830432] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.359 [2024-04-18 11:15:20.830481] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.359 [2024-04-18 11:15:20.830488] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.359 [2024-04-18 11:15:20.830492] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830497] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.359 [2024-04-18 11:15:20.830508] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830513] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.359 [2024-04-18 11:15:20.830517] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.830525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.830542] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.830597] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.830604] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.830608] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830612] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.830623] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830628] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830632] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.830640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.830657] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.830719] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.830736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.830741] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830746] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.830758] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830763] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830768] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.830776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.830794] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.830849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.830860] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.830865] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830870] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.830882] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830887] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830891] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.830899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.830917] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.830973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.830979] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.830983] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.830988] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.830999] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831004] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831008] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.831016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.831044] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831112] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831116] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831121] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831132] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831137] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 
00:27:52.360 [2024-04-18 11:15:20.831150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.831168] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831234] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831246] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831250] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831255] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831267] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831272] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831288] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.831296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.831315] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831381] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831386] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831390] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831403] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831408] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831412] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.831419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.831438] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831490] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831497] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831501] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831505] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831522] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831526] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.831534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 
[2024-04-18 11:15:20.831551] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831603] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831614] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831618] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831630] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831639] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.831647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.831664] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831720] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831727] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831735] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831746] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831751] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831755] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.831763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.831780] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831836] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831847] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831851] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831862] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831867] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831871] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.360 [2024-04-18 11:15:20.831879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.360 [2024-04-18 11:15:20.831896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.360 [2024-04-18 11:15:20.831952] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.360 [2024-04-18 11:15:20.831959] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.360 [2024-04-18 11:15:20.831963] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831967] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.360 [2024-04-18 11:15:20.831979] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831983] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.360 [2024-04-18 11:15:20.831988] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.361 [2024-04-18 11:15:20.831995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.361 [2024-04-18 11:15:20.832012] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.361 [2024-04-18 11:15:20.836053] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.361 [2024-04-18 11:15:20.836074] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.361 [2024-04-18 11:15:20.836079] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.361 [2024-04-18 11:15:20.836084] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.361 [2024-04-18 11:15:20.836098] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:52.361 [2024-04-18 11:15:20.836104] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:52.361 [2024-04-18 11:15:20.836108] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11f42c0) 00:27:52.361 [2024-04-18 11:15:20.836117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.361 [2024-04-18 11:15:20.836141] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123d9b0, cid 3, qid 0 00:27:52.361 [2024-04-18 11:15:20.836202] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:52.361 [2024-04-18 11:15:20.836210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:52.361 [2024-04-18 11:15:20.836214] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:52.361 [2024-04-18 11:15:20.836218] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x123d9b0) on tqpair=0x11f42c0 00:27:52.361 [2024-04-18 11:15:20.836227] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:52.361 0 Kelvin (-273 Celsius) 00:27:52.361 Available Spare: 0% 00:27:52.361 Available Spare Threshold: 0% 00:27:52.361 Life Percentage Used: 0% 00:27:52.361 Data Units Read: 0 00:27:52.361 Data Units Written: 0 00:27:52.361 Host Read Commands: 0 00:27:52.361 Host Write Commands: 0 00:27:52.361 Controller Busy Time: 0 minutes 00:27:52.361 Power Cycles: 0 00:27:52.361 Power On Hours: 0 hours 00:27:52.361 Unsafe Shutdowns: 0 00:27:52.361 Unrecoverable Media Errors: 0 00:27:52.361 Lifetime Error Log Entries: 0 00:27:52.361 Warning Temperature Time: 0 minutes 00:27:52.361 Critical Temperature Time: 0 minutes 00:27:52.361 00:27:52.361 Number of Queues 00:27:52.361 ================ 00:27:52.361 Number of I/O 
Submission Queues: 127 00:27:52.361 Number of I/O Completion Queues: 127 00:27:52.361 00:27:52.361 Active Namespaces 00:27:52.361 ================= 00:27:52.361 Namespace ID:1 00:27:52.361 Error Recovery Timeout: Unlimited 00:27:52.361 Command Set Identifier: NVM (00h) 00:27:52.361 Deallocate: Supported 00:27:52.361 Deallocated/Unwritten Error: Not Supported 00:27:52.361 Deallocated Read Value: Unknown 00:27:52.361 Deallocate in Write Zeroes: Not Supported 00:27:52.361 Deallocated Guard Field: 0xFFFF 00:27:52.361 Flush: Supported 00:27:52.361 Reservation: Supported 00:27:52.361 Namespace Sharing Capabilities: Multiple Controllers 00:27:52.361 Size (in LBAs): 131072 (0GiB) 00:27:52.361 Capacity (in LBAs): 131072 (0GiB) 00:27:52.361 Utilization (in LBAs): 131072 (0GiB) 00:27:52.361 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:52.361 EUI64: ABCDEF0123456789 00:27:52.361 UUID: 270c0006-9892-4ad7-adfd-bb6d42ebcfea 00:27:52.361 Thin Provisioning: Not Supported 00:27:52.361 Per-NS Atomic Units: Yes 00:27:52.361 Atomic Boundary Size (Normal): 0 00:27:52.361 Atomic Boundary Size (PFail): 0 00:27:52.361 Atomic Boundary Offset: 0 00:27:52.361 Maximum Single Source Range Length: 65535 00:27:52.361 Maximum Copy Length: 65535 00:27:52.361 Maximum Source Range Count: 1 00:27:52.361 NGUID/EUI64 Never Reused: No 00:27:52.361 Namespace Write Protected: No 00:27:52.361 Number of LBA Formats: 1 00:27:52.361 Current LBA Format: LBA Format #00 00:27:52.361 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:52.361 00:27:52.361 11:15:20 -- host/identify.sh@51 -- # sync 00:27:52.361 11:15:20 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:52.361 11:15:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.361 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:27:52.361 11:15:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.361 11:15:20 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:52.361 11:15:20 -- host/identify.sh@56 -- # nvmftestfini 00:27:52.361 11:15:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:52.361 11:15:20 -- nvmf/common.sh@117 -- # sync 00:27:52.361 11:15:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.361 11:15:20 -- nvmf/common.sh@120 -- # set +e 00:27:52.361 11:15:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.361 11:15:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.361 rmmod nvme_tcp 00:27:52.361 rmmod nvme_fabrics 00:27:52.361 rmmod nvme_keyring 00:27:52.361 11:15:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.361 11:15:20 -- nvmf/common.sh@124 -- # set -e 00:27:52.361 11:15:20 -- nvmf/common.sh@125 -- # return 0 00:27:52.361 11:15:20 -- nvmf/common.sh@478 -- # '[' -n 98910 ']' 00:27:52.361 11:15:20 -- nvmf/common.sh@479 -- # killprocess 98910 00:27:52.361 11:15:20 -- common/autotest_common.sh@936 -- # '[' -z 98910 ']' 00:27:52.361 11:15:20 -- common/autotest_common.sh@940 -- # kill -0 98910 00:27:52.361 11:15:20 -- common/autotest_common.sh@941 -- # uname 00:27:52.361 11:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:52.361 11:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98910 00:27:52.361 11:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:52.361 11:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:52.361 killing process with pid 98910 00:27:52.361 11:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98910' 00:27:52.361 11:15:20 
-- common/autotest_common.sh@955 -- # kill 98910 00:27:52.361 [2024-04-18 11:15:20.986396] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:52.361 11:15:20 -- common/autotest_common.sh@960 -- # wait 98910 00:27:52.621 11:15:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:52.621 11:15:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:52.621 11:15:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:52.621 11:15:21 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.621 11:15:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.621 11:15:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.621 11:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.621 11:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.881 11:15:21 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:52.881 00:27:52.881 real 0m2.602s 00:27:52.881 user 0m7.249s 00:27:52.881 sys 0m0.677s 00:27:52.881 11:15:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:52.881 ************************************ 00:27:52.881 END TEST nvmf_identify 00:27:52.881 ************************************ 00:27:52.881 11:15:21 -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 11:15:21 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:52.881 11:15:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:52.881 11:15:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:52.881 11:15:21 -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 ************************************ 00:27:52.881 START TEST nvmf_perf 00:27:52.881 ************************************ 00:27:52.881 11:15:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:52.881 * Looking for test storage... 
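The identify dump and teardown captured above come from host/identify.sh; a rough manual equivalent, assuming the nvmf target were still listening on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1 and that SPDK's example binaries had been built under build/examples (both are assumptions about this particular setup, not part of the captured log), could look like:

    # Sketch only -- paths and transport ID fields mirror the log above and may differ per setup.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Print the controller/namespace capability dump over NVMe-oF TCP (the block shown above).
    "$SPDK_DIR/build/examples/identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # Remove the subsystem afterwards, as identify.sh does via rpc_cmd nvmf_delete_subsystem.
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1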
00:27:52.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:52.881 11:15:21 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:52.881 11:15:21 -- nvmf/common.sh@7 -- # uname -s 00:27:52.881 11:15:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.881 11:15:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.881 11:15:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.881 11:15:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.881 11:15:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.881 11:15:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.881 11:15:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.881 11:15:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.881 11:15:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.881 11:15:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.881 11:15:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:52.881 11:15:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:27:52.881 11:15:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.881 11:15:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.881 11:15:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:52.881 11:15:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.881 11:15:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.881 11:15:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.881 11:15:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.881 11:15:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.881 11:15:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.881 11:15:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.881 11:15:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.881 11:15:21 -- paths/export.sh@5 -- # export PATH 00:27:52.882 11:15:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.882 11:15:21 -- nvmf/common.sh@47 -- # : 0 00:27:52.882 11:15:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.882 11:15:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.882 11:15:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.882 11:15:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.882 11:15:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.882 11:15:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.882 11:15:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.882 11:15:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.882 11:15:21 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:52.882 11:15:21 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:52.882 11:15:21 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:52.882 11:15:21 -- host/perf.sh@17 -- # nvmftestinit 00:27:52.882 11:15:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:52.882 11:15:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.882 11:15:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:52.882 11:15:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:52.882 11:15:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:52.882 11:15:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.882 11:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.882 11:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.882 11:15:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:52.882 11:15:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:52.882 11:15:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:52.882 11:15:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:52.882 11:15:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:52.882 11:15:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:52.882 11:15:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.882 11:15:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.882 11:15:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:52.882 11:15:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:52.882 11:15:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:52.882 11:15:21 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:52.882 11:15:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:52.882 11:15:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.882 11:15:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:52.882 11:15:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:52.882 11:15:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:52.882 11:15:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:52.882 11:15:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:53.142 11:15:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:53.142 Cannot find device "nvmf_tgt_br" 00:27:53.142 11:15:21 -- nvmf/common.sh@155 -- # true 00:27:53.142 11:15:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:53.142 Cannot find device "nvmf_tgt_br2" 00:27:53.142 11:15:21 -- nvmf/common.sh@156 -- # true 00:27:53.142 11:15:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:53.142 11:15:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:53.142 Cannot find device "nvmf_tgt_br" 00:27:53.142 11:15:21 -- nvmf/common.sh@158 -- # true 00:27:53.142 11:15:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:53.142 Cannot find device "nvmf_tgt_br2" 00:27:53.142 11:15:21 -- nvmf/common.sh@159 -- # true 00:27:53.142 11:15:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:53.142 11:15:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:53.142 11:15:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:53.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:53.142 11:15:21 -- nvmf/common.sh@162 -- # true 00:27:53.142 11:15:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:53.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:53.142 11:15:21 -- nvmf/common.sh@163 -- # true 00:27:53.142 11:15:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:53.142 11:15:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:53.142 11:15:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:53.142 11:15:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:53.142 11:15:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:53.142 11:15:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:53.142 11:15:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:53.142 11:15:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:53.142 11:15:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:53.142 11:15:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:53.142 11:15:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:53.142 11:15:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:53.142 11:15:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:53.142 11:15:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:53.143 11:15:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:27:53.143 11:15:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:53.143 11:15:21 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:53.143 11:15:21 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:53.143 11:15:21 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:53.401 11:15:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:53.401 11:15:21 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:53.401 11:15:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:53.401 11:15:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:53.401 11:15:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:53.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:27:53.401 00:27:53.401 --- 10.0.0.2 ping statistics --- 00:27:53.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.401 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:53.401 11:15:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:53.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:53.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:27:53.401 00:27:53.401 --- 10.0.0.3 ping statistics --- 00:27:53.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.401 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:53.401 11:15:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:53.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:53.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:27:53.401 00:27:53.401 --- 10.0.0.1 ping statistics --- 00:27:53.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.401 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:53.401 11:15:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.401 11:15:21 -- nvmf/common.sh@422 -- # return 0 00:27:53.401 11:15:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:53.401 11:15:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.401 11:15:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:53.401 11:15:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:53.401 11:15:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.401 11:15:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:53.401 11:15:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:53.401 11:15:21 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:53.401 11:15:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:53.401 11:15:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:53.401 11:15:21 -- common/autotest_common.sh@10 -- # set +x 00:27:53.401 11:15:21 -- nvmf/common.sh@470 -- # nvmfpid=99138 00:27:53.401 11:15:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:53.401 11:15:21 -- nvmf/common.sh@471 -- # waitforlisten 99138 00:27:53.401 11:15:21 -- common/autotest_common.sh@817 -- # '[' -z 99138 ']' 00:27:53.401 11:15:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.401 11:15:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:53.401 11:15:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:27:53.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.401 11:15:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:53.401 11:15:21 -- common/autotest_common.sh@10 -- # set +x 00:27:53.401 [2024-04-18 11:15:21.938922] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:53.401 [2024-04-18 11:15:21.939023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.660 [2024-04-18 11:15:22.082401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.660 [2024-04-18 11:15:22.181745] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.660 [2024-04-18 11:15:22.181817] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.660 [2024-04-18 11:15:22.181831] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.660 [2024-04-18 11:15:22.181843] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.660 [2024-04-18 11:15:22.181852] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.660 [2024-04-18 11:15:22.182104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.660 [2024-04-18 11:15:22.182221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.660 [2024-04-18 11:15:22.183161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.660 [2024-04-18 11:15:22.183172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.918 11:15:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:53.918 11:15:22 -- common/autotest_common.sh@850 -- # return 0 00:27:53.918 11:15:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:53.918 11:15:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:53.918 11:15:22 -- common/autotest_common.sh@10 -- # set +x 00:27:53.918 11:15:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.918 11:15:22 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:53.918 11:15:22 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:27:54.176 11:15:22 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:27:54.176 11:15:22 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:54.748 11:15:23 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:27:54.748 11:15:23 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:55.006 11:15:23 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:55.006 11:15:23 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:27:55.006 11:15:23 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:55.006 11:15:23 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:55.006 11:15:23 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:55.265 [2024-04-18 11:15:23.659938] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.265 11:15:23 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.522 11:15:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:55.522 11:15:23 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.780 11:15:24 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:55.780 11:15:24 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:56.038 11:15:24 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.296 [2024-04-18 11:15:24.781856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.296 11:15:24 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:56.554 11:15:25 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:56.554 11:15:25 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:27:56.554 11:15:25 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:56.554 11:15:25 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:27:57.929 Initializing NVMe Controllers 00:27:57.929 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:57.929 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:27:57.929 Initialization complete. Launching workers. 00:27:57.929 ======================================================== 00:27:57.929 Latency(us) 00:27:57.929 Device Information : IOPS MiB/s Average min max 00:27:57.929 PCIE (0000:00:10.0) NSID 1 from core 0: 23980.00 93.67 1334.51 307.19 8892.77 00:27:57.929 ======================================================== 00:27:57.929 Total : 23980.00 93.67 1334.51 307.19 8892.77 00:27:57.929 00:27:57.929 11:15:26 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.307 Initializing NVMe Controllers 00:27:59.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:59.307 Initialization complete. Launching workers. 
00:27:59.307 ======================================================== 00:27:59.307 Latency(us) 00:27:59.307 Device Information : IOPS MiB/s Average min max 00:27:59.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3476.93 13.58 287.34 114.41 4237.31 00:27:59.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.75 0.48 8144.38 7088.80 11991.43 00:27:59.307 ======================================================== 00:27:59.307 Total : 3600.68 14.07 557.37 114.41 11991.43 00:27:59.307 00:27:59.307 11:15:27 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.242 [2024-04-18 11:15:28.832073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.242 [2024-04-18 11:15:28.832139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.242 [2024-04-18 11:15:28.832151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.242 [2024-04-18 11:15:28.832161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.243 [2024-04-18 11:15:28.832246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bd9c0 is same with the state(5) to be set 00:28:00.500 Initializing NVMe Controllers 00:28:00.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:00.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:00.501 Initialization complete. Launching workers. 
00:28:00.501 ======================================================== 00:28:00.501 Latency(us) 00:28:00.501 Device Information : IOPS MiB/s Average min max 00:28:00.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8462.54 33.06 3782.10 931.99 10046.08 00:28:00.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2678.90 10.46 12074.86 4781.68 20150.83 00:28:00.501 ======================================================== 00:28:00.501 Total : 11141.44 43.52 5776.05 931.99 20150.83 00:28:00.501 00:28:00.501 11:15:28 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:28:00.501 11:15:28 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:03.032 Initializing NVMe Controllers 00:28:03.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.032 Controller IO queue size 128, less than required. 00:28:03.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.033 Controller IO queue size 128, less than required. 00:28:03.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:03.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:03.033 Initialization complete. Launching workers. 00:28:03.033 ======================================================== 00:28:03.033 Latency(us) 00:28:03.033 Device Information : IOPS MiB/s Average min max 00:28:03.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1453.24 363.31 90016.12 60192.04 165101.30 00:28:03.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 615.97 153.99 215660.35 113201.33 327368.54 00:28:03.033 ======================================================== 00:28:03.033 Total : 2069.21 517.30 127418.18 60192.04 327368.54 00:28:03.033 00:28:03.033 11:15:31 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:03.033 No valid NVMe controllers or AIO or URING devices found 00:28:03.033 Initializing NVMe Controllers 00:28:03.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.033 Controller IO queue size 128, less than required. 00:28:03.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.033 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:03.033 Controller IO queue size 128, less than required. 00:28:03.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.033 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:28:03.033 WARNING: Some requested NVMe devices were skipped 00:28:03.033 11:15:31 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:05.561 Initializing NVMe Controllers 00:28:05.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.561 Controller IO queue size 128, less than required. 00:28:05.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.561 Controller IO queue size 128, less than required. 00:28:05.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:05.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:05.561 Initialization complete. Launching workers. 00:28:05.561 00:28:05.561 ==================== 00:28:05.561 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:05.561 TCP transport: 00:28:05.561 polls: 9451 00:28:05.561 idle_polls: 6653 00:28:05.561 sock_completions: 2798 00:28:05.561 nvme_completions: 5535 00:28:05.561 submitted_requests: 8292 00:28:05.561 queued_requests: 1 00:28:05.561 00:28:05.561 ==================== 00:28:05.561 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:05.561 TCP transport: 00:28:05.561 polls: 9782 00:28:05.561 idle_polls: 7018 00:28:05.561 sock_completions: 2764 00:28:05.561 nvme_completions: 5685 00:28:05.561 submitted_requests: 8562 00:28:05.561 queued_requests: 1 00:28:05.561 ======================================================== 00:28:05.561 Latency(us) 00:28:05.561 Device Information : IOPS MiB/s Average min max 00:28:05.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1380.46 345.12 94629.89 68614.56 157297.75 00:28:05.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1417.88 354.47 91736.40 48700.91 132284.90 00:28:05.561 ======================================================== 00:28:05.561 Total : 2798.34 699.59 93163.80 48700.91 157297.75 00:28:05.561 00:28:05.561 11:15:34 -- host/perf.sh@66 -- # sync 00:28:05.819 11:15:34 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.077 11:15:34 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:06.077 11:15:34 -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:28:06.077 11:15:34 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:06.335 11:15:34 -- host/perf.sh@72 -- # ls_guid=2268ecc9-bdb1-4c52-883e-c635e59282ef 00:28:06.335 11:15:34 -- host/perf.sh@73 -- # get_lvs_free_mb 2268ecc9-bdb1-4c52-883e-c635e59282ef 00:28:06.335 11:15:34 -- common/autotest_common.sh@1350 -- # local lvs_uuid=2268ecc9-bdb1-4c52-883e-c635e59282ef 00:28:06.335 11:15:34 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:06.335 11:15:34 -- common/autotest_common.sh@1352 -- # local fc 00:28:06.335 11:15:34 -- common/autotest_common.sh@1353 -- # local cs 00:28:06.335 11:15:34 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:06.592 11:15:35 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:06.592 { 
00:28:06.592 "base_bdev": "Nvme0n1", 00:28:06.592 "block_size": 4096, 00:28:06.592 "cluster_size": 4194304, 00:28:06.592 "free_clusters": 1278, 00:28:06.592 "name": "lvs_0", 00:28:06.592 "total_data_clusters": 1278, 00:28:06.592 "uuid": "2268ecc9-bdb1-4c52-883e-c635e59282ef" 00:28:06.592 } 00:28:06.592 ]' 00:28:06.593 11:15:35 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="2268ecc9-bdb1-4c52-883e-c635e59282ef") .free_clusters' 00:28:06.593 11:15:35 -- common/autotest_common.sh@1355 -- # fc=1278 00:28:06.593 11:15:35 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="2268ecc9-bdb1-4c52-883e-c635e59282ef") .cluster_size' 00:28:06.593 11:15:35 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:06.593 11:15:35 -- common/autotest_common.sh@1359 -- # free_mb=5112 00:28:06.593 5112 00:28:06.593 11:15:35 -- common/autotest_common.sh@1360 -- # echo 5112 00:28:06.593 11:15:35 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:28:06.593 11:15:35 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2268ecc9-bdb1-4c52-883e-c635e59282ef lbd_0 5112 00:28:06.850 11:15:35 -- host/perf.sh@80 -- # lb_guid=f3dfd05e-0ce0-4cda-8278-2c19bf9ee9ed 00:28:06.850 11:15:35 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f3dfd05e-0ce0-4cda-8278-2c19bf9ee9ed lvs_n_0 00:28:07.415 11:15:35 -- host/perf.sh@83 -- # ls_nested_guid=9c4ba12b-7a98-455e-8f18-ea6c677a0ecd 00:28:07.415 11:15:35 -- host/perf.sh@84 -- # get_lvs_free_mb 9c4ba12b-7a98-455e-8f18-ea6c677a0ecd 00:28:07.415 11:15:35 -- common/autotest_common.sh@1350 -- # local lvs_uuid=9c4ba12b-7a98-455e-8f18-ea6c677a0ecd 00:28:07.415 11:15:35 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:07.415 11:15:35 -- common/autotest_common.sh@1352 -- # local fc 00:28:07.415 11:15:35 -- common/autotest_common.sh@1353 -- # local cs 00:28:07.415 11:15:35 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:07.673 11:15:36 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:07.673 { 00:28:07.673 "base_bdev": "Nvme0n1", 00:28:07.673 "block_size": 4096, 00:28:07.673 "cluster_size": 4194304, 00:28:07.673 "free_clusters": 0, 00:28:07.673 "name": "lvs_0", 00:28:07.673 "total_data_clusters": 1278, 00:28:07.673 "uuid": "2268ecc9-bdb1-4c52-883e-c635e59282ef" 00:28:07.673 }, 00:28:07.673 { 00:28:07.673 "base_bdev": "f3dfd05e-0ce0-4cda-8278-2c19bf9ee9ed", 00:28:07.673 "block_size": 4096, 00:28:07.673 "cluster_size": 4194304, 00:28:07.673 "free_clusters": 1276, 00:28:07.673 "name": "lvs_n_0", 00:28:07.673 "total_data_clusters": 1276, 00:28:07.673 "uuid": "9c4ba12b-7a98-455e-8f18-ea6c677a0ecd" 00:28:07.673 } 00:28:07.673 ]' 00:28:07.673 11:15:36 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="9c4ba12b-7a98-455e-8f18-ea6c677a0ecd") .free_clusters' 00:28:07.673 11:15:36 -- common/autotest_common.sh@1355 -- # fc=1276 00:28:07.673 11:15:36 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="9c4ba12b-7a98-455e-8f18-ea6c677a0ecd") .cluster_size' 00:28:07.673 11:15:36 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:07.673 5104 00:28:07.673 11:15:36 -- common/autotest_common.sh@1359 -- # free_mb=5104 00:28:07.673 11:15:36 -- common/autotest_common.sh@1360 -- # echo 5104 00:28:07.673 11:15:36 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:28:07.673 11:15:36 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9c4ba12b-7a98-455e-8f18-ea6c677a0ecd 
lbd_nest_0 5104 00:28:07.930 11:15:36 -- host/perf.sh@88 -- # lb_nested_guid=31ee1e78-75d8-42db-bae3-a0cabb04a5c1 00:28:07.930 11:15:36 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.509 11:15:36 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:08.509 11:15:36 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 31ee1e78-75d8-42db-bae3-a0cabb04a5c1 00:28:08.823 11:15:37 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.082 11:15:37 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:09.082 11:15:37 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:09.082 11:15:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:09.082 11:15:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:09.082 11:15:37 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:09.340 No valid NVMe controllers or AIO or URING devices found 00:28:09.340 Initializing NVMe Controllers 00:28:09.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.340 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:09.340 WARNING: Some requested NVMe devices were skipped 00:28:09.340 11:15:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:09.340 11:15:37 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.607 Initializing NVMe Controllers 00:28:21.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.607 Initialization complete. Launching workers. 
00:28:21.607 ======================================================== 00:28:21.607 Latency(us) 00:28:21.607 Device Information : IOPS MiB/s Average min max 00:28:21.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 874.23 109.28 1143.52 362.41 8580.03 00:28:21.607 ======================================================== 00:28:21.607 Total : 874.23 109.28 1143.52 362.41 8580.03 00:28:21.607 00:28:21.607 11:15:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:21.607 11:15:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:21.607 11:15:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.607 No valid NVMe controllers or AIO or URING devices found 00:28:21.607 Initializing NVMe Controllers 00:28:21.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.607 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:21.607 WARNING: Some requested NVMe devices were skipped 00:28:21.607 11:15:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:21.607 11:15:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.572 [2024-04-18 11:15:58.650073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3b90 is same with the state(5) to be set 00:28:31.572 [2024-04-18 11:15:58.650149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3b90 is same with the state(5) to be set 00:28:31.572 [2024-04-18 11:15:58.650179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3b90 is same with the state(5) to be set 00:28:31.572 [2024-04-18 11:15:58.650193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3b90 is same with the state(5) to be set 00:28:31.572 [2024-04-18 11:15:58.650207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3b90 is same with the state(5) to be set 00:28:31.572 [2024-04-18 11:15:58.650219] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3b90 is same with the state(5) to be set 00:28:31.572 [2024-04-18 11:15:58.650232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3b90 is same with the state(5) to be set 00:28:31.572 Initializing NVMe Controllers 00:28:31.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:31.572 Initialization complete. Launching workers. 
00:28:31.572 ======================================================== 00:28:31.572 Latency(us) 00:28:31.572 Device Information : IOPS MiB/s Average min max 00:28:31.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1130.39 141.30 28322.72 8029.95 247549.34 00:28:31.572 ======================================================== 00:28:31.572 Total : 1130.39 141.30 28322.72 8029.95 247549.34 00:28:31.572 00:28:31.572 11:15:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:31.572 11:15:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:31.572 11:15:58 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.572 No valid NVMe controllers or AIO or URING devices found 00:28:31.572 Initializing NVMe Controllers 00:28:31.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.572 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:31.572 WARNING: Some requested NVMe devices were skipped 00:28:31.572 11:15:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:31.572 11:15:58 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.543 Initializing NVMe Controllers 00:28:41.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.543 Controller IO queue size 128, less than required. 00:28:41.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:41.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.543 Initialization complete. Launching workers. 
00:28:41.543 ======================================================== 00:28:41.543 Latency(us) 00:28:41.543 Device Information : IOPS MiB/s Average min max 00:28:41.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3781.65 472.71 33915.40 12151.70 101428.90 00:28:41.543 ======================================================== 00:28:41.543 Total : 3781.65 472.71 33915.40 12151.70 101428.90 00:28:41.543 00:28:41.543 11:16:09 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.543 11:16:09 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 31ee1e78-75d8-42db-bae3-a0cabb04a5c1 00:28:41.543 11:16:10 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:41.800 11:16:10 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f3dfd05e-0ce0-4cda-8278-2c19bf9ee9ed 00:28:42.058 11:16:10 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:42.316 11:16:10 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:42.316 11:16:10 -- host/perf.sh@114 -- # nvmftestfini 00:28:42.316 11:16:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:42.316 11:16:10 -- nvmf/common.sh@117 -- # sync 00:28:42.316 11:16:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:42.316 11:16:10 -- nvmf/common.sh@120 -- # set +e 00:28:42.316 11:16:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:42.316 11:16:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:42.316 rmmod nvme_tcp 00:28:42.316 rmmod nvme_fabrics 00:28:42.316 rmmod nvme_keyring 00:28:42.316 11:16:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:42.316 11:16:10 -- nvmf/common.sh@124 -- # set -e 00:28:42.316 11:16:10 -- nvmf/common.sh@125 -- # return 0 00:28:42.316 11:16:10 -- nvmf/common.sh@478 -- # '[' -n 99138 ']' 00:28:42.316 11:16:10 -- nvmf/common.sh@479 -- # killprocess 99138 00:28:42.316 11:16:10 -- common/autotest_common.sh@936 -- # '[' -z 99138 ']' 00:28:42.316 11:16:10 -- common/autotest_common.sh@940 -- # kill -0 99138 00:28:42.316 11:16:10 -- common/autotest_common.sh@941 -- # uname 00:28:42.316 11:16:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:42.316 11:16:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99138 00:28:42.316 killing process with pid 99138 00:28:42.316 11:16:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:42.316 11:16:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:42.316 11:16:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99138' 00:28:42.316 11:16:10 -- common/autotest_common.sh@955 -- # kill 99138 00:28:42.316 11:16:10 -- common/autotest_common.sh@960 -- # wait 99138 00:28:44.215 11:16:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:44.215 11:16:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:44.215 11:16:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:44.215 11:16:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.215 11:16:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.215 11:16:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.215 11:16:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.215 11:16:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.215 11:16:12 -- nvmf/common.sh@279 -- # 
ip -4 addr flush nvmf_init_if 00:28:44.215 ************************************ 00:28:44.215 END TEST nvmf_perf 00:28:44.215 ************************************ 00:28:44.215 00:28:44.215 real 0m51.352s 00:28:44.215 user 3m14.270s 00:28:44.215 sys 0m10.973s 00:28:44.215 11:16:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:44.215 11:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:44.215 11:16:12 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:44.215 11:16:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:44.215 11:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:44.215 11:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 ************************************ 00:28:44.474 START TEST nvmf_fio_host 00:28:44.474 ************************************ 00:28:44.474 11:16:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:44.474 * Looking for test storage... 00:28:44.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:44.474 11:16:12 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:44.474 11:16:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.474 11:16:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.474 11:16:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.474 11:16:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- paths/export.sh@5 -- # export PATH 00:28:44.474 11:16:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:44.474 11:16:12 -- nvmf/common.sh@7 -- # uname -s 00:28:44.474 11:16:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.474 11:16:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.474 11:16:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.474 11:16:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.474 11:16:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.474 11:16:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.474 11:16:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.474 11:16:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.474 11:16:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.474 11:16:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.474 11:16:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:28:44.474 11:16:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:28:44.474 11:16:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.474 11:16:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.474 11:16:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:44.474 11:16:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.474 11:16:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:44.474 11:16:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.474 11:16:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.474 11:16:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.474 11:16:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- paths/export.sh@5 -- # export PATH 00:28:44.474 11:16:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.474 11:16:12 -- nvmf/common.sh@47 -- # : 0 00:28:44.474 11:16:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.474 11:16:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.474 11:16:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.474 11:16:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.474 11:16:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.474 11:16:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.474 11:16:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.474 11:16:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.474 11:16:12 -- host/fio.sh@12 -- # nvmftestinit 00:28:44.474 11:16:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:44.474 11:16:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.474 11:16:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:44.474 11:16:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:44.474 11:16:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:44.474 11:16:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.474 11:16:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.474 11:16:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.474 11:16:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:44.474 11:16:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:44.474 11:16:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:44.474 11:16:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:44.474 11:16:12 -- nvmf/common.sh@420 
-- # [[ tcp == tcp ]] 00:28:44.474 11:16:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:44.474 11:16:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.474 11:16:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.474 11:16:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:44.474 11:16:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:44.474 11:16:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:44.474 11:16:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:44.474 11:16:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:44.474 11:16:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.474 11:16:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:44.474 11:16:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:44.474 11:16:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:44.474 11:16:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:44.474 11:16:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:44.474 11:16:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:44.474 Cannot find device "nvmf_tgt_br" 00:28:44.474 11:16:12 -- nvmf/common.sh@155 -- # true 00:28:44.474 11:16:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:44.474 Cannot find device "nvmf_tgt_br2" 00:28:44.474 11:16:13 -- nvmf/common.sh@156 -- # true 00:28:44.474 11:16:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:44.474 11:16:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:44.474 Cannot find device "nvmf_tgt_br" 00:28:44.475 11:16:13 -- nvmf/common.sh@158 -- # true 00:28:44.475 11:16:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:44.475 Cannot find device "nvmf_tgt_br2" 00:28:44.475 11:16:13 -- nvmf/common.sh@159 -- # true 00:28:44.475 11:16:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:44.475 11:16:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:44.475 11:16:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:44.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:44.475 11:16:13 -- nvmf/common.sh@162 -- # true 00:28:44.475 11:16:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:44.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:44.475 11:16:13 -- nvmf/common.sh@163 -- # true 00:28:44.475 11:16:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:44.475 11:16:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:44.475 11:16:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:44.475 11:16:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:44.475 11:16:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:44.732 11:16:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:44.732 11:16:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:44.732 11:16:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:44.732 11:16:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:28:44.732 11:16:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:44.732 11:16:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:44.732 11:16:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:44.732 11:16:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:44.732 11:16:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:44.732 11:16:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:44.732 11:16:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:44.732 11:16:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:44.732 11:16:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:44.732 11:16:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:44.732 11:16:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:44.732 11:16:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:44.732 11:16:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:44.732 11:16:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:44.732 11:16:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:44.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:28:44.732 00:28:44.732 --- 10.0.0.2 ping statistics --- 00:28:44.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.732 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:44.732 11:16:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:44.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:44.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:28:44.732 00:28:44.732 --- 10.0.0.3 ping statistics --- 00:28:44.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.732 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:44.732 11:16:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:44.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:44.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:28:44.732 00:28:44.732 --- 10.0.0.1 ping statistics --- 00:28:44.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.732 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:28:44.732 11:16:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.732 11:16:13 -- nvmf/common.sh@422 -- # return 0 00:28:44.732 11:16:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:44.732 11:16:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.732 11:16:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:44.732 11:16:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:44.732 11:16:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.732 11:16:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:44.732 11:16:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:44.732 11:16:13 -- host/fio.sh@14 -- # [[ y != y ]] 00:28:44.732 11:16:13 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:28:44.732 11:16:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:44.732 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:44.732 11:16:13 -- host/fio.sh@22 -- # nvmfpid=100089 00:28:44.732 11:16:13 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:44.732 11:16:13 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:44.732 11:16:13 -- host/fio.sh@26 -- # waitforlisten 100089 00:28:44.732 11:16:13 -- common/autotest_common.sh@817 -- # '[' -z 100089 ']' 00:28:44.732 11:16:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.732 11:16:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:44.732 11:16:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.732 11:16:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:44.732 11:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:44.732 [2024-04-18 11:16:13.347838] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:44.732 [2024-04-18 11:16:13.347945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.989 [2024-04-18 11:16:13.487263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.989 [2024-04-18 11:16:13.619305] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.989 [2024-04-18 11:16:13.619392] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.989 [2024-04-18 11:16:13.619408] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.989 [2024-04-18 11:16:13.619419] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.989 [2024-04-18 11:16:13.619431] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
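The ping checks above complete the veth topology that nvmf_veth_init assembles before nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace. A condensed, standalone sketch of that topology, using only the commands visible in the trace above (teardown of stale devices and error handling omitted):

  # Target side lives in its own network namespace; the host keeps the initiator end.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair (host side)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # A bridge on the host side joins the three peer ends, so 10.0.0.1 can reach .2 and .3.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # the checks logged above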
00:28:44.989 [2024-04-18 11:16:13.619633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.989 [2024-04-18 11:16:13.619758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.989 [2024-04-18 11:16:13.620689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.989 [2024-04-18 11:16:13.620742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.923 11:16:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:45.923 11:16:14 -- common/autotest_common.sh@850 -- # return 0 00:28:45.923 11:16:14 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.923 11:16:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.924 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 [2024-04-18 11:16:14.323334] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.924 11:16:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.924 11:16:14 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:28:45.924 11:16:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:45.924 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 11:16:14 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:45.924 11:16:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.924 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 Malloc1 00:28:45.924 11:16:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.924 11:16:14 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.924 11:16:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.924 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 11:16:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.924 11:16:14 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:45.924 11:16:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.924 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 11:16:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.924 11:16:14 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.924 11:16:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.924 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 [2024-04-18 11:16:14.446738] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.924 11:16:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.924 11:16:14 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:45.924 11:16:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:45.924 11:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 11:16:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:45.924 11:16:14 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:28:45.924 11:16:14 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:45.924 11:16:14 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
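The rpc_cmd calls traced above stand up the target that the fio job is about to exercise. Reduced to the equivalent direct rpc.py invocations (a condensed sketch; paths, NQN and serial are the ones shown in the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport; -u 8192 sets in-capsule data size
  $RPC bdev_malloc_create 64 512 -b Malloc1                    # 64 MiB malloc bdev with 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # fio then drives the subsystem through SPDK's external NVMe ioengine; the fio_plugin helper
  # traced below first checks for a sanitizer runtime via ldd and LD_PRELOADs it alongside the plugin.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096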
00:28:45.924 11:16:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:45.924 11:16:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:45.924 11:16:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:45.924 11:16:14 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:45.924 11:16:14 -- common/autotest_common.sh@1327 -- # shift 00:28:45.924 11:16:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:45.924 11:16:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:45.924 11:16:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:45.924 11:16:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:45.924 11:16:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:45.924 11:16:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:45.924 11:16:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:45.924 11:16:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:46.182 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:46.182 fio-3.35 00:28:46.182 Starting 1 thread 00:28:48.713 00:28:48.713 test: (groupid=0, jobs=1): err= 0: pid=100168: Thu Apr 18 11:16:16 2024 00:28:48.713 read: IOPS=8165, BW=31.9MiB/s (33.4MB/s)(64.0MiB/2007msec) 00:28:48.713 slat (usec): min=2, max=330, avg= 2.53, stdev= 3.40 00:28:48.713 clat (usec): min=3311, max=14189, avg=8196.89, stdev=580.39 00:28:48.713 lat (usec): min=3343, max=14191, avg=8199.42, stdev=580.08 00:28:48.713 clat percentiles (usec): 00:28:48.713 | 1.00th=[ 6980], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7767], 00:28:48.713 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8160], 60.00th=[ 8291], 00:28:48.713 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:28:48.713 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[12518], 99.95th=[13304], 00:28:48.713 | 99.99th=[14091] 00:28:48.713 bw ( KiB/s): min=31624, max=33112, per=99.90%, avg=32632.00, stdev=697.52, samples=4 00:28:48.713 iops : min= 7908, max= 8278, avg=8158.50, stdev=173.42, samples=4 00:28:48.713 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(64.0MiB/2007msec); 0 zone resets 00:28:48.713 slat (usec): min=2, max=262, avg= 2.64, stdev= 2.29 00:28:48.713 clat (usec): min=2357, max=13923, avg=7418.91, stdev=515.32 00:28:48.713 lat (usec): min=2371, max=13926, avg=7421.55, stdev=515.08 00:28:48.713 clat percentiles (usec): 00:28:48.713 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 7046], 00:28:48.714 | 30.00th=[ 7242], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:28:48.714 | 70.00th=[ 
7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:28:48.714 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[11469], 99.95th=[12649], 00:28:48.714 | 99.99th=[13566] 00:28:48.714 bw ( KiB/s): min=32320, max=33392, per=100.00%, avg=32642.00, stdev=502.85, samples=4 00:28:48.714 iops : min= 8080, max= 8348, avg=8160.50, stdev=125.71, samples=4 00:28:48.714 lat (msec) : 4=0.15%, 10=99.52%, 20=0.33% 00:28:48.714 cpu : usr=66.47%, sys=24.81%, ctx=62, majf=0, minf=6 00:28:48.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:48.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:48.714 issued rwts: total=16389,16376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:48.714 00:28:48.714 Run status group 0 (all jobs): 00:28:48.714 READ: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=64.0MiB (67.1MB), run=2007-2007msec 00:28:48.714 WRITE: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=64.0MiB (67.1MB), run=2007-2007msec 00:28:48.714 11:16:16 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:48.714 11:16:16 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:48.714 11:16:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:48.714 11:16:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:48.714 11:16:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:48.714 11:16:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:48.714 11:16:16 -- common/autotest_common.sh@1327 -- # shift 00:28:48.714 11:16:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:48.714 11:16:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:48.714 11:16:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:48.714 11:16:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:48.714 11:16:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:48.714 11:16:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:48.714 11:16:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:48.714 11:16:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:48.714 test: (g=0): rw=randrw, 
bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:48.714 fio-3.35 00:28:48.714 Starting 1 thread 00:28:51.242 00:28:51.242 test: (groupid=0, jobs=1): err= 0: pid=100212: Thu Apr 18 11:16:19 2024 00:28:51.242 read: IOPS=6958, BW=109MiB/s (114MB/s)(218MiB/2008msec) 00:28:51.242 slat (usec): min=3, max=137, avg= 4.29, stdev= 2.24 00:28:51.242 clat (usec): min=1519, max=25789, avg=11029.38, stdev=3142.97 00:28:51.242 lat (usec): min=1522, max=25792, avg=11033.66, stdev=3143.30 00:28:51.242 clat percentiles (usec): 00:28:51.242 | 1.00th=[ 5211], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 8455], 00:28:51.242 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10814], 60.00th=[11600], 00:28:51.242 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14877], 95.00th=[16712], 00:28:51.242 | 99.00th=[21365], 99.50th=[23725], 99.90th=[25035], 99.95th=[25560], 00:28:51.242 | 99.99th=[25822] 00:28:51.242 bw ( KiB/s): min=50752, max=63520, per=50.66%, avg=56400.00, stdev=5281.36, samples=4 00:28:51.242 iops : min= 3172, max= 3970, avg=3525.00, stdev=330.08, samples=4 00:28:51.242 write: IOPS=4016, BW=62.8MiB/s (65.8MB/s)(115MiB/1836msec); 0 zone resets 00:28:51.242 slat (usec): min=36, max=339, avg=43.77, stdev=10.87 00:28:51.242 clat (usec): min=6245, max=26045, avg=13039.72, stdev=2727.93 00:28:51.242 lat (usec): min=6282, max=26168, avg=13083.49, stdev=2731.74 00:28:51.242 clat percentiles (usec): 00:28:51.242 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10683], 00:28:51.242 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12649], 60.00th=[13304], 00:28:51.242 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16712], 95.00th=[17957], 00:28:51.242 | 99.00th=[21103], 99.50th=[22152], 99.90th=[23200], 99.95th=[23725], 00:28:51.242 | 99.99th=[26084] 00:28:51.242 bw ( KiB/s): min=52480, max=66107, per=90.95%, avg=58446.75, stdev=5665.90, samples=4 00:28:51.242 iops : min= 3280, max= 4131, avg=3652.75, stdev=353.81, samples=4 00:28:51.242 lat (msec) : 2=0.05%, 4=0.15%, 10=29.09%, 20=68.96%, 50=1.75% 00:28:51.242 cpu : usr=67.03%, sys=20.87%, ctx=44, majf=0, minf=2 00:28:51.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:28:51.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:51.242 issued rwts: total=13973,7374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:51.242 00:28:51.242 Run status group 0 (all jobs): 00:28:51.242 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=218MiB (229MB), run=2008-2008msec 00:28:51.242 WRITE: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=115MiB (121MB), run=1836-1836msec 00:28:51.242 11:16:19 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:28:51.242 11:16:19 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:28:51.242 11:16:19 -- host/fio.sh@49 -- # get_nvme_bdfs 00:28:51.242 11:16:19 -- common/autotest_common.sh@1499 -- # bdfs=() 00:28:51.242 11:16:19 -- common/autotest_common.sh@1499 -- # local bdfs 00:28:51.242 11:16:19 -- common/autotest_common.sh@1500 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:51.242 11:16:19 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:51.242 11:16:19 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:28:51.242 11:16:19 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:28:51.242 11:16:19 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:51.242 11:16:19 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:51.242 Nvme0n1 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- host/fio.sh@51 -- # ls_guid=3d9367d5-c83b-4ddc-a4aa-c0f9af7fd05d 00:28:51.242 11:16:19 -- host/fio.sh@52 -- # get_lvs_free_mb 3d9367d5-c83b-4ddc-a4aa-c0f9af7fd05d 00:28:51.242 11:16:19 -- common/autotest_common.sh@1350 -- # local lvs_uuid=3d9367d5-c83b-4ddc-a4aa-c0f9af7fd05d 00:28:51.242 11:16:19 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:51.242 11:16:19 -- common/autotest_common.sh@1352 -- # local fc 00:28:51.242 11:16:19 -- common/autotest_common.sh@1353 -- # local cs 00:28:51.242 11:16:19 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:51.242 { 00:28:51.242 "base_bdev": "Nvme0n1", 00:28:51.242 "block_size": 4096, 00:28:51.242 "cluster_size": 1073741824, 00:28:51.242 "free_clusters": 4, 00:28:51.242 "name": "lvs_0", 00:28:51.242 "total_data_clusters": 4, 00:28:51.242 "uuid": "3d9367d5-c83b-4ddc-a4aa-c0f9af7fd05d" 00:28:51.242 } 00:28:51.242 ]' 00:28:51.242 11:16:19 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="3d9367d5-c83b-4ddc-a4aa-c0f9af7fd05d") .free_clusters' 00:28:51.242 11:16:19 -- common/autotest_common.sh@1355 -- # fc=4 00:28:51.242 11:16:19 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="3d9367d5-c83b-4ddc-a4aa-c0f9af7fd05d") .cluster_size' 00:28:51.242 11:16:19 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:28:51.242 11:16:19 -- common/autotest_common.sh@1359 -- # free_mb=4096 00:28:51.242 4096 00:28:51.242 11:16:19 -- common/autotest_common.sh@1360 -- # echo 4096 00:28:51.242 11:16:19 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:51.242 2186f994-773f-4326-81d8-88a3412cee62 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set 
+x 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:51.242 11:16:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.242 11:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:51.242 11:16:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.242 11:16:19 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:51.242 11:16:19 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:51.242 11:16:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:51.242 11:16:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:51.242 11:16:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:51.242 11:16:19 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:51.242 11:16:19 -- common/autotest_common.sh@1327 -- # shift 00:28:51.242 11:16:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:51.242 11:16:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:51.242 11:16:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:51.242 11:16:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:51.242 11:16:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:51.242 11:16:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:51.242 11:16:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:51.242 11:16:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:51.242 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:51.242 fio-3.35 00:28:51.242 Starting 1 thread 00:28:53.784 00:28:53.784 test: (groupid=0, jobs=1): err= 0: pid=100295: Thu Apr 18 11:16:22 2024 00:28:53.784 read: IOPS=5799, BW=22.7MiB/s (23.8MB/s)(45.5MiB/2009msec) 00:28:53.784 slat (usec): min=2, max=343, avg= 2.67, stdev= 4.07 00:28:53.784 clat (usec): min=5231, 
max=25514, avg=11579.20, stdev=1533.06 00:28:53.784 lat (usec): min=5240, max=25516, avg=11581.88, stdev=1532.91 00:28:53.784 clat percentiles (usec): 00:28:53.784 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:28:53.784 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:28:53.784 | 70.00th=[12387], 80.00th=[13042], 90.00th=[13698], 95.00th=[14091], 00:28:53.784 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16712], 99.95th=[17695], 00:28:53.784 | 99.99th=[19530] 00:28:53.784 bw ( KiB/s): min=22104, max=25888, per=99.89%, avg=23174.00, stdev=1815.09, samples=4 00:28:53.784 iops : min= 5526, max= 6472, avg=5793.50, stdev=453.77, samples=4 00:28:53.784 write: IOPS=5786, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2009msec); 0 zone resets 00:28:53.784 slat (usec): min=2, max=243, avg= 2.82, stdev= 2.56 00:28:53.784 clat (usec): min=2425, max=23324, avg=10412.76, stdev=1516.56 00:28:53.784 lat (usec): min=2438, max=23326, avg=10415.59, stdev=1516.49 00:28:53.784 clat percentiles (usec): 00:28:53.784 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:28:53.784 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:28:53.784 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12387], 95.00th=[12780], 00:28:53.784 | 99.00th=[13829], 99.50th=[15139], 99.90th=[20579], 99.95th=[21890], 00:28:53.784 | 99.99th=[23200] 00:28:53.784 bw ( KiB/s): min=21904, max=25520, per=99.91%, avg=23126.00, stdev=1679.33, samples=4 00:28:53.784 iops : min= 5476, max= 6380, avg=5781.50, stdev=419.83, samples=4 00:28:53.784 lat (msec) : 4=0.03%, 10=30.55%, 20=69.32%, 50=0.09% 00:28:53.784 cpu : usr=70.37%, sys=23.26%, ctx=4, majf=0, minf=6 00:28:53.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:53.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:53.784 issued rwts: total=11652,11625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:53.784 00:28:53.784 Run status group 0 (all jobs): 00:28:53.784 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.5MiB (47.7MB), run=2009-2009msec 00:28:53.784 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2009-2009msec 00:28:53.784 11:16:22 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:53.784 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.785 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:53.785 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.785 11:16:22 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:53.785 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.785 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:53.785 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.785 11:16:22 -- host/fio.sh@62 -- # ls_nested_guid=42005f69-0d89-4fa3-abbe-d7379ecd7116 00:28:53.785 11:16:22 -- host/fio.sh@63 -- # get_lvs_free_mb 42005f69-0d89-4fa3-abbe-d7379ecd7116 00:28:53.785 11:16:22 -- common/autotest_common.sh@1350 -- # local lvs_uuid=42005f69-0d89-4fa3-abbe-d7379ecd7116 00:28:53.785 11:16:22 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:53.785 11:16:22 -- common/autotest_common.sh@1352 -- # local fc 00:28:53.785 11:16:22 -- 
common/autotest_common.sh@1353 -- # local cs 00:28:53.785 11:16:22 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:53.785 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.785 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:53.785 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.785 11:16:22 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:53.785 { 00:28:53.785 "base_bdev": "Nvme0n1", 00:28:53.785 "block_size": 4096, 00:28:53.785 "cluster_size": 1073741824, 00:28:53.785 "free_clusters": 0, 00:28:53.785 "name": "lvs_0", 00:28:53.785 "total_data_clusters": 4, 00:28:53.785 "uuid": "3d9367d5-c83b-4ddc-a4aa-c0f9af7fd05d" 00:28:53.785 }, 00:28:53.785 { 00:28:53.785 "base_bdev": "2186f994-773f-4326-81d8-88a3412cee62", 00:28:53.785 "block_size": 4096, 00:28:53.785 "cluster_size": 4194304, 00:28:53.785 "free_clusters": 1022, 00:28:53.785 "name": "lvs_n_0", 00:28:53.785 "total_data_clusters": 1022, 00:28:53.785 "uuid": "42005f69-0d89-4fa3-abbe-d7379ecd7116" 00:28:53.785 } 00:28:53.785 ]' 00:28:53.785 11:16:22 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="42005f69-0d89-4fa3-abbe-d7379ecd7116") .free_clusters' 00:28:53.785 11:16:22 -- common/autotest_common.sh@1355 -- # fc=1022 00:28:53.785 11:16:22 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="42005f69-0d89-4fa3-abbe-d7379ecd7116") .cluster_size' 00:28:53.785 11:16:22 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:53.785 11:16:22 -- common/autotest_common.sh@1359 -- # free_mb=4088 00:28:53.785 4088 00:28:53.785 11:16:22 -- common/autotest_common.sh@1360 -- # echo 4088 00:28:53.785 11:16:22 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:28:53.785 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.785 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:53.785 1c669f35-18b6-4f85-8b11-3860b8fb7452 00:28:53.785 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.785 11:16:22 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:53.785 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.785 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:53.785 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.785 11:16:22 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:53.785 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.785 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:53.785 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.785 11:16:22 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:53.785 11:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.785 11:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:53.785 11:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.785 11:16:22 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:53.785 11:16:22 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:53.785 11:16:22 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:53.785 11:16:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:53.785 11:16:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:53.785 11:16:22 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:53.785 11:16:22 -- common/autotest_common.sh@1327 -- # shift 00:28:53.785 11:16:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:53.785 11:16:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:53.785 11:16:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:53.785 11:16:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:53.785 11:16:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:53.785 11:16:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:53.785 11:16:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:53.785 11:16:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:54.042 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:54.042 fio-3.35 00:28:54.042 Starting 1 thread 00:28:56.568 00:28:56.568 test: (groupid=0, jobs=1): err= 0: pid=100351: Thu Apr 18 11:16:24 2024 00:28:56.568 read: IOPS=5497, BW=21.5MiB/s (22.5MB/s)(43.2MiB/2010msec) 00:28:56.568 slat (usec): min=2, max=252, avg= 2.81, stdev= 3.11 00:28:56.568 clat (usec): min=4471, max=22321, avg=12313.94, stdev=1365.59 00:28:56.568 lat (usec): min=4476, max=22324, avg=12316.75, stdev=1365.43 00:28:56.568 clat percentiles (usec): 00:28:56.568 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:28:56.568 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:28:56.568 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13829], 95.00th=[14615], 00:28:56.568 | 99.00th=[16909], 99.50th=[18220], 99.90th=[20055], 99.95th=[21103], 00:28:56.568 | 99.99th=[22152] 00:28:56.568 bw ( KiB/s): min=20024, max=22752, per=99.81%, avg=21946.00, stdev=1293.95, samples=4 00:28:56.568 iops : min= 5006, max= 5688, avg=5486.50, stdev=323.49, samples=4 00:28:56.568 write: IOPS=5459, BW=21.3MiB/s (22.4MB/s)(42.9MiB/2010msec); 0 zone resets 00:28:56.568 slat (usec): min=2, max=159, avg= 2.90, stdev= 1.95 00:28:56.568 clat (usec): min=2003, max=20050, avg=10959.67, stdev=1306.12 00:28:56.568 lat (usec): min=2010, max=20053, avg=10962.58, stdev=1306.01 00:28:56.568 clat percentiles (usec): 00:28:56.568 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:28:56.568 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:28:56.568 | 70.00th=[11338], 
80.00th=[11731], 90.00th=[12256], 95.00th=[13042], 00:28:56.568 | 99.00th=[15401], 99.50th=[17171], 99.90th=[19792], 99.95th=[19792], 00:28:56.568 | 99.99th=[20055] 00:28:56.568 bw ( KiB/s): min=20888, max=22528, per=100.00%, avg=21846.00, stdev=692.68, samples=4 00:28:56.568 iops : min= 5222, max= 5632, avg=5461.50, stdev=173.17, samples=4 00:28:56.568 lat (msec) : 4=0.05%, 10=9.62%, 20=90.24%, 50=0.09% 00:28:56.568 cpu : usr=70.18%, sys=22.90%, ctx=8, majf=0, minf=6 00:28:56.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:56.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:56.568 issued rwts: total=11049,10974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:56.568 00:28:56.568 Run status group 0 (all jobs): 00:28:56.568 READ: bw=21.5MiB/s (22.5MB/s), 21.5MiB/s-21.5MiB/s (22.5MB/s-22.5MB/s), io=43.2MiB (45.3MB), run=2010-2010msec 00:28:56.568 WRITE: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s), io=42.9MiB (44.9MB), run=2010-2010msec 00:28:56.568 11:16:24 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:56.568 11:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.568 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:28:56.568 11:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.568 11:16:24 -- host/fio.sh@72 -- # sync 00:28:56.568 11:16:24 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:56.568 11:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.568 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:28:56.568 11:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.568 11:16:24 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:28:56.568 11:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.568 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:28:56.568 11:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.568 11:16:24 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:28:56.568 11:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.568 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:28:56.568 11:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.568 11:16:24 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:28:56.568 11:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.568 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:28:56.568 11:16:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.568 11:16:24 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:28:56.568 11:16:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.568 11:16:24 -- common/autotest_common.sh@10 -- # set +x 00:28:57.133 11:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.133 11:16:25 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:28:57.133 11:16:25 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:28:57.133 11:16:25 -- host/fio.sh@84 -- # nvmftestfini 00:28:57.133 11:16:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:57.133 11:16:25 -- nvmf/common.sh@117 -- # sync 00:28:57.133 11:16:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.133 11:16:25 -- nvmf/common.sh@120 -- # set +e 00:28:57.133 11:16:25 -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:28:57.133 11:16:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.133 rmmod nvme_tcp 00:28:57.133 rmmod nvme_fabrics 00:28:57.133 rmmod nvme_keyring 00:28:57.133 11:16:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.133 11:16:25 -- nvmf/common.sh@124 -- # set -e 00:28:57.133 11:16:25 -- nvmf/common.sh@125 -- # return 0 00:28:57.133 11:16:25 -- nvmf/common.sh@478 -- # '[' -n 100089 ']' 00:28:57.133 11:16:25 -- nvmf/common.sh@479 -- # killprocess 100089 00:28:57.133 11:16:25 -- common/autotest_common.sh@936 -- # '[' -z 100089 ']' 00:28:57.133 11:16:25 -- common/autotest_common.sh@940 -- # kill -0 100089 00:28:57.133 11:16:25 -- common/autotest_common.sh@941 -- # uname 00:28:57.133 11:16:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:57.133 11:16:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100089 00:28:57.133 11:16:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:57.133 11:16:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:57.133 11:16:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100089' 00:28:57.133 killing process with pid 100089 00:28:57.133 11:16:25 -- common/autotest_common.sh@955 -- # kill 100089 00:28:57.133 11:16:25 -- common/autotest_common.sh@960 -- # wait 100089 00:28:57.391 11:16:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:57.391 11:16:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:57.391 11:16:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:57.391 11:16:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.391 11:16:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.391 11:16:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.391 11:16:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.391 11:16:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.391 11:16:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:57.391 00:28:57.391 real 0m13.089s 00:28:57.391 user 0m54.350s 00:28:57.391 sys 0m3.662s 00:28:57.391 11:16:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:57.391 ************************************ 00:28:57.391 END TEST nvmf_fio_host 00:28:57.391 11:16:25 -- common/autotest_common.sh@10 -- # set +x 00:28:57.391 ************************************ 00:28:57.391 11:16:25 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:57.391 11:16:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:57.391 11:16:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:57.391 11:16:25 -- common/autotest_common.sh@10 -- # set +x 00:28:57.648 ************************************ 00:28:57.648 START TEST nvmf_failover 00:28:57.648 ************************************ 00:28:57.648 11:16:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:57.648 * Looking for test storage... 
00:28:57.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:57.648 11:16:26 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:57.648 11:16:26 -- nvmf/common.sh@7 -- # uname -s 00:28:57.648 11:16:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.648 11:16:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.648 11:16:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.648 11:16:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.648 11:16:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.648 11:16:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.648 11:16:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.648 11:16:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.648 11:16:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.648 11:16:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.648 11:16:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:28:57.648 11:16:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:28:57.648 11:16:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.648 11:16:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.648 11:16:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:57.648 11:16:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.648 11:16:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:57.648 11:16:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.648 11:16:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.648 11:16:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.648 11:16:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.648 11:16:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.648 11:16:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.648 11:16:26 -- paths/export.sh@5 -- # export PATH 00:28:57.648 11:16:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.648 11:16:26 -- nvmf/common.sh@47 -- # : 0 00:28:57.648 11:16:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:57.648 11:16:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:57.648 11:16:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.648 11:16:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.649 11:16:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.649 11:16:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:57.649 11:16:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:57.649 11:16:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:57.649 11:16:26 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:57.649 11:16:26 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:57.649 11:16:26 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:57.649 11:16:26 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:57.649 11:16:26 -- host/failover.sh@18 -- # nvmftestinit 00:28:57.649 11:16:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:57.649 11:16:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.649 11:16:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:57.649 11:16:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:57.649 11:16:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:57.649 11:16:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.649 11:16:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.649 11:16:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.649 11:16:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:57.649 11:16:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:57.649 11:16:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:57.649 11:16:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:57.649 11:16:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:57.649 11:16:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:57.649 11:16:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.649 11:16:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.649 11:16:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:57.649 11:16:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:57.649 11:16:26 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:57.649 11:16:26 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:57.649 11:16:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:57.649 11:16:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.649 11:16:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:57.649 11:16:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:57.649 11:16:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:57.649 11:16:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:57.649 11:16:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:57.649 11:16:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:57.649 Cannot find device "nvmf_tgt_br" 00:28:57.649 11:16:26 -- nvmf/common.sh@155 -- # true 00:28:57.649 11:16:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:57.649 Cannot find device "nvmf_tgt_br2" 00:28:57.649 11:16:26 -- nvmf/common.sh@156 -- # true 00:28:57.649 11:16:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:57.649 11:16:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:57.649 Cannot find device "nvmf_tgt_br" 00:28:57.649 11:16:26 -- nvmf/common.sh@158 -- # true 00:28:57.649 11:16:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:57.649 Cannot find device "nvmf_tgt_br2" 00:28:57.649 11:16:26 -- nvmf/common.sh@159 -- # true 00:28:57.649 11:16:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:57.649 11:16:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:57.907 11:16:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:57.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:57.907 11:16:26 -- nvmf/common.sh@162 -- # true 00:28:57.907 11:16:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:57.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:57.907 11:16:26 -- nvmf/common.sh@163 -- # true 00:28:57.907 11:16:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:57.907 11:16:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:57.907 11:16:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:57.907 11:16:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:57.907 11:16:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:57.907 11:16:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:57.907 11:16:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:57.907 11:16:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:57.907 11:16:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:57.907 11:16:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:57.907 11:16:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:57.907 11:16:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:57.907 11:16:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:57.907 11:16:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:28:57.907 11:16:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:57.907 11:16:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:57.907 11:16:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:57.907 11:16:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:57.907 11:16:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:57.907 11:16:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:57.907 11:16:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:57.907 11:16:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:57.907 11:16:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:57.907 11:16:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:57.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:28:57.907 00:28:57.907 --- 10.0.0.2 ping statistics --- 00:28:57.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.907 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:28:57.907 11:16:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:57.907 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:57.907 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:28:57.907 00:28:57.907 --- 10.0.0.3 ping statistics --- 00:28:57.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.907 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:28:57.907 11:16:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:57.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:28:57.907 00:28:57.907 --- 10.0.0.1 ping statistics --- 00:28:57.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.907 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:28:57.907 11:16:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.907 11:16:26 -- nvmf/common.sh@422 -- # return 0 00:28:57.907 11:16:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:57.907 11:16:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.907 11:16:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:57.907 11:16:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:57.907 11:16:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.907 11:16:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:57.907 11:16:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:57.907 11:16:26 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:57.907 11:16:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:57.907 11:16:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:57.907 11:16:26 -- common/autotest_common.sh@10 -- # set +x 00:28:57.907 11:16:26 -- nvmf/common.sh@470 -- # nvmfpid=100567 00:28:57.907 11:16:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:57.907 11:16:26 -- nvmf/common.sh@471 -- # waitforlisten 100567 00:28:57.907 11:16:26 -- common/autotest_common.sh@817 -- # '[' -z 100567 ']' 00:28:57.907 11:16:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.907 11:16:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:57.907 11:16:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.907 11:16:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:57.907 11:16:26 -- common/autotest_common.sh@10 -- # set +x 00:28:58.165 [2024-04-18 11:16:26.579543] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:58.165 [2024-04-18 11:16:26.579643] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.165 [2024-04-18 11:16:26.720880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:58.422 [2024-04-18 11:16:26.815069] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.422 [2024-04-18 11:16:26.815133] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.422 [2024-04-18 11:16:26.815145] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.422 [2024-04-18 11:16:26.815154] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.422 [2024-04-18 11:16:26.815161] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
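nvmfappstart above launches a fresh target for the failover test inside the same namespace, this time with core mask 0xE (binary 1110), so the reactors come up on cores 1-3 as the notices just below confirm; -e 0xFFFF enables every tracepoint group and -i 0 fixes the shared-memory ID. A minimal sketch of an equivalent manual launch-and-wait, assuming the default /var/tmp/spdk.sock RPC socket (waitforlisten's real implementation retries with a bounded count):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Block until the target answers on its UNIX-domain RPC socket before issuing any RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done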
00:28:58.422 [2024-04-18 11:16:26.815276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.422 [2024-04-18 11:16:26.815354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.422 [2024-04-18 11:16:26.815600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.987 11:16:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:58.987 11:16:27 -- common/autotest_common.sh@850 -- # return 0 00:28:58.987 11:16:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:58.987 11:16:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:58.987 11:16:27 -- common/autotest_common.sh@10 -- # set +x 00:28:58.987 11:16:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.987 11:16:27 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:59.245 [2024-04-18 11:16:27.760108] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.245 11:16:27 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:59.502 Malloc0 00:28:59.502 11:16:28 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.760 11:16:28 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.018 11:16:28 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.276 [2024-04-18 11:16:28.745418] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.276 11:16:28 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:00.535 [2024-04-18 11:16:28.981596] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:00.535 11:16:29 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:00.793 [2024-04-18 11:16:29.217790] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:00.793 11:16:29 -- host/failover.sh@31 -- # bdevperf_pid=100679 00:29:00.793 11:16:29 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:00.793 11:16:29 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:00.793 11:16:29 -- host/failover.sh@34 -- # waitforlisten 100679 /var/tmp/bdevperf.sock 00:29:00.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:00.793 11:16:29 -- common/autotest_common.sh@817 -- # '[' -z 100679 ']' 00:29:00.793 11:16:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.793 11:16:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:00.793 11:16:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
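[Editor's note] With the target now listening on /var/tmp/spdk.sock, failover.sh provisions it entirely through scripts/rpc.py, exactly as the trace records: a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and three listeners on 10.0.0.2 ports 4420/4421/4422; bdevperf is then launched in wait-for-RPC mode on its own socket, ready to run a 15-second, queue-depth-128, 4 KiB verify workload. Condensed restatement of those calls (paths as in this workspace; the loop is shorthand for the three add_listener calls):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                      # transport flags exactly as in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MB RAM bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done
    # bdevperf stays idle (-z) until perform_tests is sent over /var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f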
00:29:00.793 11:16:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:00.793 11:16:29 -- common/autotest_common.sh@10 -- # set +x 00:29:01.726 11:16:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:01.726 11:16:30 -- common/autotest_common.sh@850 -- # return 0 00:29:01.726 11:16:30 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:01.984 NVMe0n1 00:29:02.242 11:16:30 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.500 00:29:02.500 11:16:30 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:02.500 11:16:30 -- host/failover.sh@39 -- # run_test_pid=100732 00:29:02.500 11:16:30 -- host/failover.sh@41 -- # sleep 1 00:29:03.434 11:16:31 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.694 [2024-04-18 11:16:32.242235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.242894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.242999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243417] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243719] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243865] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.243995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.244985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.694 [2024-04-18 11:16:32.245054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245134] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 
00:29:03.695 [2024-04-18 11:16:32.245197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.245981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is 
same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 [2024-04-18 11:16:32.246650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a5770 is same with the state(5) to be set 00:29:03.695 11:16:32 -- host/failover.sh@45 -- # sleep 3 00:29:06.979 11:16:35 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:07.243 00:29:07.243 11:16:35 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:07.522 [2024-04-18 11:16:35.950846] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6310 is same with the state(5) to be set 00:29:07.522 [2024-04-18 11:16:35.950902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6310 is same with the state(5) to be set 00:29:07.522 [2024-04-18 11:16:35.950914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6310 is same with the state(5) to be set 00:29:07.522 [2024-04-18 11:16:35.950923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6310 is same with the state(5) to be set 00:29:07.522 [2024-04-18 11:16:35.950932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6310 is same with the state(5) to be set 00:29:07.522 [2024-04-18 11:16:35.950941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6310 is same with the state(5) to be set 00:29:07.522 [2024-04-18 11:16:35.950950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a6310 is same with the state(5) to be set 00:29:07.522 11:16:35 -- host/failover.sh@50 -- # sleep 3 00:29:10.803 11:16:38 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.803 [2024-04-18 11:16:39.197258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.803 11:16:39 -- host/failover.sh@55 -- # sleep 1 00:29:11.738 11:16:40 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:11.997 [2024-04-18 11:16:40.482495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be 
set 00:29:11.997 [2024-04-18 11:16:40.482592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482701] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482709] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482748] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482790] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 
is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482983] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.482991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.483007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.483016] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.483042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.483053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.483061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.997 [2024-04-18 11:16:40.483070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483164] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483234] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 
00:29:11.998 [2024-04-18 11:16:40.483423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 [2024-04-18 11:16:40.483440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd5a0 is same with the state(5) to be set 00:29:11.998 11:16:40 -- host/failover.sh@59 -- # wait 100732 00:29:18.607 0 00:29:18.607 11:16:46 -- host/failover.sh@61 -- # killprocess 100679 00:29:18.607 11:16:46 -- common/autotest_common.sh@936 -- # '[' -z 100679 ']' 00:29:18.607 11:16:46 -- common/autotest_common.sh@940 -- # kill -0 100679 00:29:18.607 11:16:46 -- common/autotest_common.sh@941 -- # uname 00:29:18.607 11:16:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:18.607 11:16:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100679 00:29:18.607 killing process with pid 100679 00:29:18.607 11:16:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:18.607 11:16:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:18.607 11:16:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100679' 00:29:18.607 11:16:46 -- common/autotest_common.sh@955 -- # kill 100679 00:29:18.607 11:16:46 -- common/autotest_common.sh@960 -- # wait 100679 00:29:18.607 11:16:46 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:18.607 [2024-04-18 11:16:29.289746] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:18.607 [2024-04-18 11:16:29.289895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100679 ] 00:29:18.607 [2024-04-18 11:16:29.426698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.607 [2024-04-18 11:16:29.521645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.607 Running I/O for 15 seconds... 
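[Editor's note] Everything from the cat of try.txt onward is bdevperf's own log. The failover sequence that produced it, restated from the RPC calls in the trace: NVMe0 is attached with two TCP paths to the same subsystem (ports 4420 and 4421), perform_tests starts the 15-second verify workload, and listeners are then removed and re-added underneath the running I/O so the bdev_nvme layer must fail over between paths. The long runs of "recv state of tqpair ... state(5)" messages are the target tearing down the qpairs of each removed listener, and the ABORTED - SQ DELETION completions below are the in-flight READ/WRITE commands on the dropped path being aborted; the run still ends with the 0 failure count printed above. Sketch of the sequence (backgrounding perform_tests with & / wait is a simplification of the script's run_test_pid handling):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF="$RPC -s /var/tmp/bdevperf.sock"
    NQN=nqn.2016-06.io.spdk:cnode1

    # two initial paths for the multipath controller NVMe0
    $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN

    # kick off the verify workload inside the idle bdevperf instance
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # force failovers while I/O is in flight
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420; sleep 3
    $BPERF bdev_nvme_attach_controller  -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421; sleep 3
    $RPC nvmf_subsystem_add_listener    $NQN -t tcp -a 10.0.0.2 -s 4420; sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    wait    # perform_tests reports the final error count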
00:29:18.607 [2024-04-18 11:16:32.247116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247804] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.607 [2024-04-18 11:16:32.247943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.247973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.247988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.248001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.248016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.248041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.248060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.248074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.248089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.248102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.248117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80184 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.248131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.248155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.248171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.607 [2024-04-18 11:16:32.248186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.607 [2024-04-18 11:16:32.248201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.608 [2024-04-18 11:16:32.248440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248736] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.248985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.248999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249337] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.608 [2024-04-18 11:16:32.249366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.608 [2024-04-18 11:16:32.249381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.249395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 
[2024-04-18 11:16:32.249950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.249979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.249993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.609 [2024-04-18 11:16:32.250340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.609 [2024-04-18 11:16:32.250572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.609 [2024-04-18 11:16:32.250593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80664 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.250973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.250989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.251003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.251018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:32.251042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.251059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2001f80 is same with the state(5) to be set 00:29:18.610 [2024-04-18 11:16:32.251075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.610 [2024-04-18 11:16:32.251085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.610 [2024-04-18 11:16:32.251096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80712 len:8 PRP1 0x0 PRP2 0x0 00:29:18.610 [2024-04-18 11:16:32.251109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.251166] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2001f80 was disconnected and freed. reset controller. 
00:29:18.610 [2024-04-18 11:16:32.251193] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:18.610 [2024-04-18 11:16:32.251249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.610 [2024-04-18 11:16:32.251270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.251285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.610 [2024-04-18 11:16:32.251298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.251312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.610 [2024-04-18 11:16:32.251325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.251338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.610 [2024-04-18 11:16:32.251351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:32.251364] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.610 [2024-04-18 11:16:32.251431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe34b0 (9): Bad file descriptor 00:29:18.610 [2024-04-18 11:16:32.255471] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.610 [2024-04-18 11:16:32.290497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
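[editor's note] The records above cover one complete failover cycle: queued I/O on the old TCP qpair is manually completed with ABORTED - SQ DELETION status, the qpair is disconnected and freed, and bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the controller reset completes ("Resetting controller successful"). For readers reproducing this pattern outside the CI harness, the sketch below shows one way such a target/initiator setup might be assembled with SPDK's scripts/rpc.py. It is a rough sketch only: the bdev names (Malloc0, Nvme0), sizes, and exact flag spellings are assumptions and not necessarily what this test script uses; depending on the SPDK version, an explicit multipath/failover mode option may also be required on bdev_nvme_attach_controller.

# Hedged sketch: NVMe-oF TCP target with two listeners, plus a host-side
# bdev_nvme controller with an alternate path. Names and addresses are
# illustrative; only the portal addresses/ports and the subsystem NQN come
# from the log itself.
rpc.py bdev_malloc_create -b Malloc0 64 512
rpc.py nvmf_create_transport -t TCP
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side: attach via the first portal, then register the second portal
# under the same controller name so bdev_nvme has a path to fail over to.
rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# With I/O running, removing the active listener forces the host to abort
# queued requests (ABORTED - SQ DELETION) and fail over to 10.0.0.2:4421,
# which is the behavior logged above.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420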
00:29:18.610 [2024-04-18 11:16:35.952448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952836] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.610 [2024-04-18 11:16:35.952936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.610 [2024-04-18 11:16:35.952975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.952990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.610 [2024-04-18 11:16:35.953004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.610 [2024-04-18 11:16:35.953019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.610 [2024-04-18 11:16:35.953046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953150] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87816 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 
[2024-04-18 11:16:35.953766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.953980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.611 [2024-04-18 11:16:35.953994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.611 [2024-04-18 11:16:35.954009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.612 [2024-04-18 11:16:35.954529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.612 [2024-04-18 11:16:35.954558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.612 [2024-04-18 11:16:35.954588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.612 [2024-04-18 11:16:35.954617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.612 [2024-04-18 11:16:35.954647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.612 [2024-04-18 11:16:35.954676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.612 [2024-04-18 11:16:35.954705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 11:16:35.954962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.612 [2024-04-18 11:16:35.954975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.612 [2024-04-18 
11:16:35.955020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:18.612 [2024-04-18 11:16:35.955047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88168 len:8 PRP1 0x0 PRP2 0x0
00:29:18.612 [2024-04-18 11:16:35.955063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:18.612 [2024-04-18 11:16:35.955136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:18.612 [2024-04-18 11:16:35.955160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (0c) command/completion pair is reported for admin-queue cid:2, cid:1 and cid:0, each with ABORTED - SQ DELETION (00/08) ...]
00:29:18.612 [2024-04-18 11:16:35.955272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe34b0 is same with the state(5) to be set
00:29:18.612 [2024-04-18 11:16:35.955474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:18.612 [2024-04-18 11:16:35.955493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:18.612 [2024-04-18 11:16:35.955508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88176 len:8 PRP1 0x0 PRP2 0x0
00:29:18.612 [2024-04-18 11:16:35.955522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same three-record sequence (579:nvme_qpair_abort_queued_reqs "aborting queued i/o" / 558:nvme_qpair_manual_complete_request "Command completed manually:" / 474:spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08) qid:1") repeats between 11:16:35.955539 and 11:16:35.972222 (elapsed markers 00:29:18.612-00:29:18.618) for every remaining queued I/O on sqid:1: WRITE lba:88184-88424, READ lba:87576-87680, READ lba:87408-87512, WRITE lba:87688-88088, READ lba:87520-87568 and WRITE lba:88096-88136, each len:8 PRP1 0x0 PRP2 0x0 ...]
00:29:18.618 [2024-04-18 11:16:35.972223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:18.618 [2024-04-18 11:16:35.972237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:18.618 [2024-04-18 11:16:35.972252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88144 len:8 PRP1 0x0 PRP2 0x0 00:29:18.618 [2024-04-18 11:16:35.972270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:35.972289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.618 [2024-04-18 11:16:35.972303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.618 [2024-04-18 11:16:35.972327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88152 len:8 PRP1 0x0 PRP2 0x0 00:29:18.618 [2024-04-18 11:16:35.972346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:35.972366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.618 [2024-04-18 11:16:35.972386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.618 [2024-04-18 11:16:35.972401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88160 len:8 PRP1 0x0 PRP2 0x0 00:29:18.618 [2024-04-18 11:16:35.972419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:35.972438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.618 [2024-04-18 11:16:35.972452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.618 [2024-04-18 11:16:35.972467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88168 len:8 PRP1 0x0 PRP2 0x0 00:29:18.618 [2024-04-18 11:16:35.972485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:35.972585] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fef9a0 was disconnected and freed. reset controller. 00:29:18.618 [2024-04-18 11:16:35.972610] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:18.618 [2024-04-18 11:16:35.972636] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.618 [2024-04-18 11:16:35.972733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe34b0 (9): Bad file descriptor 00:29:18.618 [2024-04-18 11:16:35.978500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.618 [2024-04-18 11:16:36.013305] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
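The cycle above ends with "Resetting controller successful" presumably only because the target is already listening on the alternate port the initiator fails over to. A rough sketch (not part of the captured output) of that listener setup, reusing the subsystem NQN, address, and rpc.py path that appear verbatim in the shell trace further down in this log; the 4420 listener is assumed to have been created when the subsystem was first configured:

  # sketch: add the extra TCP listeners the initiator can fail over to
  # (NQN, address, and script path taken from the shell trace below)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422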
00:29:18.618 [2024-04-18 11:16:40.483551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483938] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.483969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.483992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.484006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.484020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.484054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.484078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.484093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.484107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.484122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.484136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.618 [2024-04-18 11:16:40.484151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.618 [2024-04-18 11:16:40.484166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.484979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.484995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14000 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.619 [2024-04-18 11:16:40.485393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.619 [2024-04-18 11:16:40.485559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.619 [2024-04-18 11:16:40.485580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.485979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.485995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486008] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.620 [2024-04-18 11:16:40.486632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.620 [2024-04-18 11:16:40.486647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.621 [2024-04-18 11:16:40.486661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:18.621 [2024-04-18 11:16:40.486676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.621 [2024-04-18 11:16:40.486689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.621 [2024-04-18 11:16:40.486717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.621 [2024-04-18 11:16:40.486746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.621 [2024-04-18 11:16:40.486782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.621 [2024-04-18 11:16:40.486816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.486851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.486880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.486909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.486938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.486966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.486987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487323] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14656 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.621 [2024-04-18 11:16:40.487785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ac580 is same with the state(5) to be set 00:29:18.621 [2024-04-18 11:16:40.487821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:18.621 [2024-04-18 11:16:40.487832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:18.621 [2024-04-18 11:16:40.487843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14448 len:8 PRP1 0x0 PRP2 0x0 00:29:18.621 [2024-04-18 11:16:40.487856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.621 [2024-04-18 11:16:40.487914] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ac580 was disconnected and freed. reset controller. 
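The "Start failover from ... to ..." notices above cycle through the transport IDs registered under the one controller name. A rough sketch (not part of the captured output) of that registration, built from the bdev_nvme_attach_controller calls that appear verbatim in the shell trace below; the -s /var/tmp/bdevperf.sock socket belongs to the bdevperf instance started later in this log, and an equivalent setup for the earlier run is assumed:

  # sketch: attach the same controller name (-b NVMe0) at three target ports so
  # bdev_nvme has alternate trids to fail over between
  for port in 4420 4421 4422; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done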
00:29:18.621 [2024-04-18 11:16:40.487933] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:18.621 [2024-04-18 11:16:40.487998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.622 [2024-04-18 11:16:40.488020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.622 [2024-04-18 11:16:40.488050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.622 [2024-04-18 11:16:40.488066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.622 [2024-04-18 11:16:40.488091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.622 [2024-04-18 11:16:40.488105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.622 [2024-04-18 11:16:40.488119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.622 [2024-04-18 11:16:40.488132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.622 [2024-04-18 11:16:40.488146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.622 [2024-04-18 11:16:40.488197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe34b0 (9): Bad file descriptor 00:29:18.622 [2024-04-18 11:16:40.492178] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.622 [2024-04-18 11:16:40.523541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:18.622 00:29:18.622 Latency(us) 00:29:18.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.622 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:18.622 Verification LBA range: start 0x0 length 0x4000 00:29:18.622 NVMe0n1 : 15.01 8658.42 33.82 200.70 0.00 14417.03 595.78 31218.97 00:29:18.622 =================================================================================================================== 00:29:18.622 Total : 8658.42 33.82 200.70 0.00 14417.03 595.78 31218.97 00:29:18.622 Received shutdown signal, test time was about 15.000000 seconds 00:29:18.622 00:29:18.622 Latency(us) 00:29:18.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.622 =================================================================================================================== 00:29:18.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.622 11:16:46 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:18.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:18.622 11:16:46 -- host/failover.sh@65 -- # count=3 00:29:18.622 11:16:46 -- host/failover.sh@67 -- # (( count != 3 )) 00:29:18.622 11:16:46 -- host/failover.sh@73 -- # bdevperf_pid=100930 00:29:18.622 11:16:46 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:18.622 11:16:46 -- host/failover.sh@75 -- # waitforlisten 100930 /var/tmp/bdevperf.sock 00:29:18.622 11:16:46 -- common/autotest_common.sh@817 -- # '[' -z 100930 ']' 00:29:18.622 11:16:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:18.622 11:16:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:18.622 11:16:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:18.622 11:16:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:18.622 11:16:46 -- common/autotest_common.sh@10 -- # set +x 00:29:18.880 11:16:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:18.880 11:16:47 -- common/autotest_common.sh@850 -- # return 0 00:29:18.880 11:16:47 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:19.138 [2024-04-18 11:16:47.668671] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:19.138 11:16:47 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:19.396 [2024-04-18 11:16:47.921007] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:19.396 11:16:47 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.655 NVMe0n1 00:29:19.912 11:16:48 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.170 00:29:20.170 11:16:48 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.429 00:29:20.429 11:16:48 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.429 11:16:48 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:20.687 11:16:49 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.946 11:16:49 -- host/failover.sh@87 -- # sleep 3 00:29:24.233 11:16:52 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:24.233 11:16:52 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:24.233 11:16:52 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:24.233 11:16:52 -- host/failover.sh@90 -- # run_test_pid=101067 00:29:24.233 11:16:52 -- host/failover.sh@92 -- # wait 101067 00:29:25.616 0 00:29:25.616 11:16:53 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:25.616 [2024-04-18 11:16:46.396973] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:25.616 [2024-04-18 11:16:46.397097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100930 ] 00:29:25.616 [2024-04-18 11:16:46.532877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.616 [2024-04-18 11:16:46.628673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.616 [2024-04-18 11:16:49.482732] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:25.616 [2024-04-18 11:16:49.482846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.616 [2024-04-18 11:16:49.482871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.616 [2024-04-18 11:16:49.482891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.616 [2024-04-18 11:16:49.482905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.616 [2024-04-18 11:16:49.482919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.616 [2024-04-18 11:16:49.482932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.616 [2024-04-18 11:16:49.482946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.616 [2024-04-18 11:16:49.482959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.616 [2024-04-18 11:16:49.482973] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.616 [2024-04-18 11:16:49.483015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.616 [2024-04-18 11:16:49.483057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2c4b0 (9): Bad file descriptor 00:29:25.616 [2024-04-18 11:16:49.489111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:25.616 Running I/O for 1 seconds... 
00:29:25.616 00:29:25.616 Latency(us) 00:29:25.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.616 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:25.616 Verification LBA range: start 0x0 length 0x4000 00:29:25.616 NVMe0n1 : 1.00 8930.27 34.88 0.00 0.00 14256.72 2144.81 14477.50 00:29:25.616 =================================================================================================================== 00:29:25.616 Total : 8930.27 34.88 0.00 0.00 14256.72 2144.81 14477.50 00:29:25.616 11:16:53 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:25.616 11:16:53 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:25.616 11:16:54 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:25.874 11:16:54 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:25.874 11:16:54 -- host/failover.sh@99 -- # grep -q NVMe0 00:29:26.131 11:16:54 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:26.389 11:16:54 -- host/failover.sh@101 -- # sleep 3 00:29:29.670 11:16:57 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:29.670 11:16:57 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:29.670 11:16:58 -- host/failover.sh@108 -- # killprocess 100930 00:29:29.670 11:16:58 -- common/autotest_common.sh@936 -- # '[' -z 100930 ']' 00:29:29.670 11:16:58 -- common/autotest_common.sh@940 -- # kill -0 100930 00:29:29.670 11:16:58 -- common/autotest_common.sh@941 -- # uname 00:29:29.670 11:16:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:29.670 11:16:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100930 00:29:29.670 11:16:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:29.670 killing process with pid 100930 00:29:29.670 11:16:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:29.670 11:16:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100930' 00:29:29.670 11:16:58 -- common/autotest_common.sh@955 -- # kill 100930 00:29:29.671 11:16:58 -- common/autotest_common.sh@960 -- # wait 100930 00:29:29.929 11:16:58 -- host/failover.sh@110 -- # sync 00:29:29.929 11:16:58 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:30.187 11:16:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:30.187 11:16:58 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:30.187 11:16:58 -- host/failover.sh@116 -- # nvmftestfini 00:29:30.187 11:16:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:30.187 11:16:58 -- nvmf/common.sh@117 -- # sync 00:29:30.187 11:16:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.187 11:16:58 -- nvmf/common.sh@120 -- # set +e 00:29:30.187 11:16:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.187 11:16:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.187 rmmod nvme_tcp 00:29:30.187 rmmod nvme_fabrics 00:29:30.187 rmmod nvme_keyring 00:29:30.187 11:16:58 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.187 11:16:58 -- nvmf/common.sh@124 -- # set -e 00:29:30.187 11:16:58 -- nvmf/common.sh@125 -- # return 0 00:29:30.187 11:16:58 -- nvmf/common.sh@478 -- # '[' -n 100567 ']' 00:29:30.187 11:16:58 -- nvmf/common.sh@479 -- # killprocess 100567 00:29:30.187 11:16:58 -- common/autotest_common.sh@936 -- # '[' -z 100567 ']' 00:29:30.187 11:16:58 -- common/autotest_common.sh@940 -- # kill -0 100567 00:29:30.187 11:16:58 -- common/autotest_common.sh@941 -- # uname 00:29:30.187 11:16:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:30.187 11:16:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100567 00:29:30.187 11:16:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:30.187 11:16:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:30.187 killing process with pid 100567 00:29:30.187 11:16:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100567' 00:29:30.187 11:16:58 -- common/autotest_common.sh@955 -- # kill 100567 00:29:30.187 11:16:58 -- common/autotest_common.sh@960 -- # wait 100567 00:29:30.459 11:16:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:30.459 11:16:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:30.459 11:16:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:30.459 11:16:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.459 11:16:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.459 11:16:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.459 11:16:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.459 11:16:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.459 11:16:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:30.459 00:29:30.459 real 0m32.920s 00:29:30.459 user 2m8.539s 00:29:30.459 sys 0m4.598s 00:29:30.459 11:16:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:30.459 11:16:58 -- common/autotest_common.sh@10 -- # set +x 00:29:30.459 ************************************ 00:29:30.459 END TEST nvmf_failover 00:29:30.459 ************************************ 00:29:30.459 11:16:59 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:30.459 11:16:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:30.459 11:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.459 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:29:30.459 ************************************ 00:29:30.459 START TEST nvmf_discovery 00:29:30.459 ************************************ 00:29:30.459 11:16:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:30.718 * Looking for test storage... 
00:29:30.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:30.718 11:16:59 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:30.718 11:16:59 -- nvmf/common.sh@7 -- # uname -s 00:29:30.718 11:16:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.718 11:16:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.718 11:16:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.718 11:16:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.718 11:16:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.718 11:16:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.718 11:16:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.718 11:16:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.718 11:16:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.718 11:16:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.718 11:16:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:30.718 11:16:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:30.718 11:16:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.718 11:16:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.718 11:16:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:30.718 11:16:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.718 11:16:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:30.718 11:16:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.718 11:16:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.718 11:16:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.718 11:16:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.718 11:16:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.718 11:16:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.718 11:16:59 -- paths/export.sh@5 -- # export PATH 00:29:30.718 11:16:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.718 11:16:59 -- nvmf/common.sh@47 -- # : 0 00:29:30.718 11:16:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.718 11:16:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.718 11:16:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.718 11:16:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.718 11:16:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.718 11:16:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.718 11:16:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.718 11:16:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.718 11:16:59 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:30.718 11:16:59 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:30.718 11:16:59 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:30.718 11:16:59 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:30.718 11:16:59 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:30.718 11:16:59 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:30.718 11:16:59 -- host/discovery.sh@25 -- # nvmftestinit 00:29:30.718 11:16:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:30.718 11:16:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.718 11:16:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:30.718 11:16:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:30.718 11:16:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:30.718 11:16:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.718 11:16:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.718 11:16:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.718 11:16:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:30.718 11:16:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:30.718 11:16:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:30.718 11:16:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:30.718 11:16:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:30.718 11:16:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:30.718 11:16:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.718 11:16:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.718 11:16:59 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:30.718 11:16:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:30.718 11:16:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:30.718 11:16:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:30.718 11:16:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:30.718 11:16:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.718 11:16:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:30.718 11:16:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:30.718 11:16:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:30.718 11:16:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:30.718 11:16:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:30.718 11:16:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:30.719 Cannot find device "nvmf_tgt_br" 00:29:30.719 11:16:59 -- nvmf/common.sh@155 -- # true 00:29:30.719 11:16:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:30.719 Cannot find device "nvmf_tgt_br2" 00:29:30.719 11:16:59 -- nvmf/common.sh@156 -- # true 00:29:30.719 11:16:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:30.719 11:16:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:30.719 Cannot find device "nvmf_tgt_br" 00:29:30.719 11:16:59 -- nvmf/common.sh@158 -- # true 00:29:30.719 11:16:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:30.719 Cannot find device "nvmf_tgt_br2" 00:29:30.719 11:16:59 -- nvmf/common.sh@159 -- # true 00:29:30.719 11:16:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:30.719 11:16:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:30.719 11:16:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:30.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:30.719 11:16:59 -- nvmf/common.sh@162 -- # true 00:29:30.719 11:16:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:30.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:30.719 11:16:59 -- nvmf/common.sh@163 -- # true 00:29:30.719 11:16:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:30.719 11:16:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:30.719 11:16:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:30.719 11:16:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:30.719 11:16:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:30.719 11:16:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:30.978 11:16:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:30.978 11:16:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:30.978 11:16:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:30.978 11:16:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:30.978 11:16:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:30.978 11:16:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:30.978 11:16:59 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:30.978 11:16:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:30.978 11:16:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:30.978 11:16:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:30.978 11:16:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:30.978 11:16:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:30.978 11:16:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:30.978 11:16:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:30.978 11:16:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:30.978 11:16:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:30.978 11:16:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:30.978 11:16:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:30.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:29:30.978 00:29:30.978 --- 10.0.0.2 ping statistics --- 00:29:30.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.978 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:30.978 11:16:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:30.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:30.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:29:30.978 00:29:30.978 --- 10.0.0.3 ping statistics --- 00:29:30.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.978 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:29:30.978 11:16:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:30.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:29:30.978 00:29:30.978 --- 10.0.0.1 ping statistics --- 00:29:30.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.978 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:29:30.978 11:16:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.978 11:16:59 -- nvmf/common.sh@422 -- # return 0 00:29:30.978 11:16:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:30.978 11:16:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.978 11:16:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:30.978 11:16:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:30.978 11:16:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.978 11:16:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:30.978 11:16:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:30.978 11:16:59 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:30.978 11:16:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:30.978 11:16:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:30.978 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:29:30.978 11:16:59 -- nvmf/common.sh@470 -- # nvmfpid=101382 00:29:30.978 11:16:59 -- nvmf/common.sh@471 -- # waitforlisten 101382 00:29:30.978 11:16:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:30.978 11:16:59 -- common/autotest_common.sh@817 -- # '[' -z 101382 ']' 00:29:30.978 11:16:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.978 11:16:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:30.978 11:16:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.978 11:16:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:30.978 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:29:30.978 [2024-04-18 11:16:59.577185] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:30.978 [2024-04-18 11:16:59.577274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.236 [2024-04-18 11:16:59.714025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.236 [2024-04-18 11:16:59.809870] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.236 [2024-04-18 11:16:59.809948] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.236 [2024-04-18 11:16:59.809960] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.236 [2024-04-18 11:16:59.809969] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.236 [2024-04-18 11:16:59.809976] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
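
The ping checks just above are the tail end of nvmf_veth_init: the initiator interface stays in the root namespace on 10.0.0.1, the target interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge before nvmf_tgt is started inside that namespace. A reduced sketch of the topology, using only commands already visible in the trace (the second target interface, the 'ip link set ... up' calls and the iptables ACCEPT rule for port 4420 are set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2    # root namespace can reach the target address through the bridge
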
00:29:31.236 [2024-04-18 11:16:59.810014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.170 11:17:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:32.170 11:17:00 -- common/autotest_common.sh@850 -- # return 0 00:29:32.170 11:17:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:32.170 11:17:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:32.170 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 11:17:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.170 11:17:00 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.170 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.170 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 [2024-04-18 11:17:00.535932] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.170 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.170 11:17:00 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:32.170 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.170 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 [2024-04-18 11:17:00.544071] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:32.170 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.170 11:17:00 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:32.170 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.170 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 null0 00:29:32.170 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.170 11:17:00 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:32.170 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.170 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 null1 00:29:32.170 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.170 11:17:00 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:32.170 11:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:32.170 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 11:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:32.170 11:17:00 -- host/discovery.sh@45 -- # hostpid=101432 00:29:32.170 11:17:00 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:32.170 11:17:00 -- host/discovery.sh@46 -- # waitforlisten 101432 /tmp/host.sock 00:29:32.170 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:32.170 11:17:00 -- common/autotest_common.sh@817 -- # '[' -z 101432 ']' 00:29:32.170 11:17:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:32.170 11:17:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:32.170 11:17:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:32.170 11:17:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:32.170 11:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:32.170 [2024-04-18 11:17:00.643335] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:29:32.170 [2024-04-18 11:17:00.643469] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101432 ] 00:29:32.170 [2024-04-18 11:17:00.789095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.428 [2024-04-18 11:17:00.883401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.359 11:17:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:33.359 11:17:01 -- common/autotest_common.sh@850 -- # return 0 00:29:33.359 11:17:01 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.359 11:17:01 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@72 -- # notify_id=0 00:29:33.360 11:17:01 -- host/discovery.sh@83 -- # get_subsystem_names 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # sort 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # xargs 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:33.360 11:17:01 -- host/discovery.sh@84 -- # get_bdev_list 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # sort 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # xargs 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:33.360 11:17:01 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@87 -- # get_subsystem_names 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # sort 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 
-- # set +x 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # xargs 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:33.360 11:17:01 -- host/discovery.sh@88 -- # get_bdev_list 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # sort 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # xargs 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:33.360 11:17:01 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@91 -- # get_subsystem_names 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # sort 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.360 11:17:01 -- host/discovery.sh@59 -- # xargs 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.360 11:17:01 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:33.360 11:17:01 -- host/discovery.sh@92 -- # get_bdev_list 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.360 11:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.360 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # sort 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.360 11:17:01 -- host/discovery.sh@55 -- # xargs 00:29:33.360 11:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.618 11:17:02 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:33.618 11:17:02 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:33.618 11:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.618 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 [2024-04-18 11:17:02.036552] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.618 11:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.618 11:17:02 -- host/discovery.sh@97 -- # get_subsystem_names 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.618 11:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # sort 00:29:33.618 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # xargs 00:29:33.618 11:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.618 11:17:02 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:33.618 11:17:02 
-- host/discovery.sh@98 -- # get_bdev_list 00:29:33.618 11:17:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.618 11:17:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.618 11:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.618 11:17:02 -- host/discovery.sh@55 -- # sort 00:29:33.618 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:17:02 -- host/discovery.sh@55 -- # xargs 00:29:33.618 11:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.618 11:17:02 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:33.618 11:17:02 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:33.618 11:17:02 -- host/discovery.sh@79 -- # expected_count=0 00:29:33.618 11:17:02 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:33.618 11:17:02 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:33.618 11:17:02 -- common/autotest_common.sh@901 -- # local max=10 00:29:33.618 11:17:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:33.618 11:17:02 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:33.618 11:17:02 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:33.618 11:17:02 -- host/discovery.sh@74 -- # jq '. | length' 00:29:33.618 11:17:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:33.618 11:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.618 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.618 11:17:02 -- host/discovery.sh@74 -- # notification_count=0 00:29:33.618 11:17:02 -- host/discovery.sh@75 -- # notify_id=0 00:29:33.618 11:17:02 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:33.618 11:17:02 -- common/autotest_common.sh@904 -- # return 0 00:29:33.618 11:17:02 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:33.618 11:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.618 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.618 11:17:02 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:33.618 11:17:02 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:33.618 11:17:02 -- common/autotest_common.sh@901 -- # local max=10 00:29:33.618 11:17:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:33.618 11:17:02 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:33.618 11:17:02 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # sort 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.618 11:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:33.618 11:17:02 -- host/discovery.sh@59 -- # xargs 00:29:33.618 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.618 11:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:33.877 11:17:02 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:29:33.877 11:17:02 -- common/autotest_common.sh@906 -- # sleep 1 00:29:34.134 [2024-04-18 11:17:02.689305] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:34.134 [2024-04-18 11:17:02.689345] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:34.134 [2024-04-18 11:17:02.689383] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:34.392 [2024-04-18 11:17:02.775494] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:34.392 [2024-04-18 11:17:02.831621] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:34.392 [2024-04-18 11:17:02.831664] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:34.649 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.649 11:17:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:34.649 11:17:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:34.649 11:17:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.649 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.649 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:34.649 11:17:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.649 11:17:03 -- host/discovery.sh@59 -- # sort 00:29:34.649 11:17:03 -- host/discovery.sh@59 -- # xargs 00:29:34.649 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.906 11:17:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.906 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:34.906 11:17:03 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:34.906 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:34.906 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.906 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.906 11:17:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:34.906 11:17:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:34.906 11:17:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.906 11:17:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.906 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.906 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:34.906 11:17:03 -- host/discovery.sh@55 -- # sort 00:29:34.906 11:17:03 -- host/discovery.sh@55 -- # xargs 00:29:34.906 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.906 11:17:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:34.906 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:34.906 11:17:03 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:34.906 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:34.906 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.906 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.906 11:17:03 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:34.906 11:17:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:34.906 11:17:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:34.906 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.906 11:17:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:34.906 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:34.906 11:17:03 -- host/discovery.sh@63 -- # sort -n 00:29:34.906 11:17:03 -- host/discovery.sh@63 -- # xargs 00:29:34.906 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.906 11:17:03 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:29:34.906 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:34.906 11:17:03 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:34.907 11:17:03 -- host/discovery.sh@79 -- # expected_count=1 00:29:34.907 11:17:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:34.907 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:34.907 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.907 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.907 11:17:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:34.907 11:17:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:34.907 11:17:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:34.907 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.907 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:34.907 11:17:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:34.907 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.907 11:17:03 -- host/discovery.sh@74 -- # notification_count=1 00:29:34.907 11:17:03 -- host/discovery.sh@75 -- # notify_id=1 00:29:34.907 11:17:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:34.907 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:34.907 11:17:03 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:34.907 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.907 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:34.907 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.907 11:17:03 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:34.907 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:34.907 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:34.907 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:34.907 11:17:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:34.907 11:17:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:34.907 11:17:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.907 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.907 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:34.907 11:17:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.907 11:17:03 -- host/discovery.sh@55 -- # sort 00:29:34.907 11:17:03 -- host/discovery.sh@55 -- # xargs 00:29:35.164 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.164 11:17:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:35.164 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:35.164 11:17:03 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:35.164 11:17:03 -- host/discovery.sh@79 -- # expected_count=1 00:29:35.164 11:17:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:35.164 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:35.164 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:35.164 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:35.164 11:17:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:35.164 11:17:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:35.164 11:17:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:35.164 11:17:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:35.164 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.164 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:35.164 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.164 11:17:03 -- host/discovery.sh@74 -- # notification_count=1 00:29:35.164 11:17:03 -- host/discovery.sh@75 -- # notify_id=2 00:29:35.164 11:17:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:35.164 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:35.165 11:17:03 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:35.165 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.165 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:35.165 [2024-04-18 11:17:03.625972] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:35.165 [2024-04-18 11:17:03.626450] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:35.165 [2024-04-18 11:17:03.626500] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:35.165 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.165 11:17:03 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:35.165 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:35.165 11:17:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:35.165 11:17:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:35.165 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.165 11:17:03 -- host/discovery.sh@59 -- # xargs 00:29:35.165 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:35.165 11:17:03 -- host/discovery.sh@59 -- # sort 00:29:35.165 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.165 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:35.165 11:17:03 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:35.165 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:35.165 11:17:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.165 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.165 11:17:03 -- host/discovery.sh@55 -- # sort 00:29:35.165 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:35.165 11:17:03 -- host/discovery.sh@55 -- # xargs 00:29:35.165 11:17:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:35.165 [2024-04-18 11:17:03.712514] 
bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:35.165 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:35.165 11:17:03 -- common/autotest_common.sh@904 -- # return 0 00:29:35.165 11:17:03 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@901 -- # local max=10 00:29:35.165 11:17:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:35.165 11:17:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:35.165 11:17:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:35.165 11:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:35.165 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:35.165 11:17:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:35.165 11:17:03 -- host/discovery.sh@63 -- # sort -n 00:29:35.165 11:17:03 -- host/discovery.sh@63 -- # xargs 00:29:35.165 [2024-04-18 11:17:03.771811] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:35.165 [2024-04-18 11:17:03.771839] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:35.165 [2024-04-18 11:17:03.771847] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:35.165 11:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:35.423 11:17:03 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:35.423 11:17:03 -- common/autotest_common.sh@906 -- # sleep 1 00:29:36.358 11:17:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:36.358 11:17:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:36.358 11:17:04 -- host/discovery.sh@63 -- # xargs 00:29:36.358 11:17:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:36.358 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.358 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:29:36.358 11:17:04 -- host/discovery.sh@63 -- # sort -n 00:29:36.358 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:36.358 11:17:04 -- common/autotest_common.sh@904 -- # return 0 00:29:36.358 11:17:04 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:36.358 11:17:04 -- host/discovery.sh@79 -- # expected_count=0 00:29:36.358 11:17:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:36.358 
11:17:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:36.358 11:17:04 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.358 11:17:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:36.358 11:17:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:36.358 11:17:04 -- host/discovery.sh@74 -- # jq '. | length' 00:29:36.358 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.358 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:29:36.358 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.358 11:17:04 -- host/discovery.sh@74 -- # notification_count=0 00:29:36.358 11:17:04 -- host/discovery.sh@75 -- # notify_id=2 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:36.358 11:17:04 -- common/autotest_common.sh@904 -- # return 0 00:29:36.358 11:17:04 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:36.358 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.358 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:29:36.358 [2024-04-18 11:17:04.927331] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:36.358 [2024-04-18 11:17:04.927370] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:36.358 [2024-04-18 11:17:04.929092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.358 [2024-04-18 11:17:04.929143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.358 [2024-04-18 11:17:04.929158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.358 [2024-04-18 11:17:04.929169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.358 [2024-04-18 11:17:04.929179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.358 [2024-04-18 11:17:04.929188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.358 [2024-04-18 11:17:04.929198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.358 [2024-04-18 11:17:04.929207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.358 [2024-04-18 11:17:04.929216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.358 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.358 11:17:04 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:36.358 11:17:04 -- common/autotest_common.sh@900 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:36.358 11:17:04 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.358 11:17:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:36.358 11:17:04 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:36.359 11:17:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.359 11:17:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.359 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.359 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:29:36.359 11:17:04 -- host/discovery.sh@59 -- # xargs 00:29:36.359 11:17:04 -- host/discovery.sh@59 -- # sort 00:29:36.359 [2024-04-18 11:17:04.939028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.359 11:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.359 [2024-04-18 11:17:04.949059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.359 [2024-04-18 11:17:04.949202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.949255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.949271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188920 with addr=10.0.0.2, port=4420 00:29:36.359 [2024-04-18 11:17:04.949283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.359 [2024-04-18 11:17:04.949300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.359 [2024-04-18 11:17:04.949329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.359 [2024-04-18 11:17:04.949340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.359 [2024-04-18 11:17:04.949351] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.359 [2024-04-18 11:17:04.949367] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.359 [2024-04-18 11:17:04.959133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.359 [2024-04-18 11:17:04.959236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.959283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.959299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188920 with addr=10.0.0.2, port=4420 00:29:36.359 [2024-04-18 11:17:04.959310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.359 [2024-04-18 11:17:04.959326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.359 [2024-04-18 11:17:04.959351] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.359 [2024-04-18 11:17:04.959362] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.359 [2024-04-18 11:17:04.959372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.359 [2024-04-18 11:17:04.959387] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.359 [2024-04-18 11:17:04.969188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.359 [2024-04-18 11:17:04.969274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.969319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.969335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188920 with addr=10.0.0.2, port=4420 00:29:36.359 [2024-04-18 11:17:04.969346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.359 [2024-04-18 11:17:04.969361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.359 [2024-04-18 11:17:04.969444] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.359 [2024-04-18 11:17:04.969458] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.359 [2024-04-18 11:17:04.969467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.359 [2024-04-18 11:17:04.969482] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.359 [2024-04-18 11:17:04.979247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.359 [2024-04-18 11:17:04.979336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.979384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.979400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188920 with addr=10.0.0.2, port=4420 00:29:36.359 [2024-04-18 11:17:04.979410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.359 [2024-04-18 11:17:04.979426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.359 [2024-04-18 11:17:04.979450] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.359 [2024-04-18 11:17:04.979461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.359 [2024-04-18 11:17:04.979470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.359 [2024-04-18 11:17:04.979484] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.359 11:17:04 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.359 11:17:04 -- common/autotest_common.sh@904 -- # return 0 00:29:36.359 11:17:04 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:36.359 11:17:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:36.359 11:17:04 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.359 [2024-04-18 11:17:04.989302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.359 [2024-04-18 11:17:04.989374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.989420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.359 [2024-04-18 11:17:04.989435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188920 with addr=10.0.0.2, port=4420 00:29:36.359 [2024-04-18 11:17:04.989446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.359 [2024-04-18 11:17:04.989461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.359 [2024-04-18 11:17:04.989486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.359 [2024-04-18 11:17:04.989497] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.359 [2024-04-18 11:17:04.989507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.359 [2024-04-18 11:17:04.989521] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
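The repeated "connect() failed, errno = 111" entries above are the host side of the listener removal: the 10.0.0.2:4420 listener was just dropped via nvmf_subsystem_remove_listener, so each reconnect attempt to that path is refused until the next discovery log page prunes it and only port 4421 remains. A minimal sketch of the polling the harness performs for that, reusing the rpc_cmd helper and /tmp/host.sock socket seen in this log (the retry bound and sleep are illustrative, not the harness's exact values):

    # poll until only the surviving 4421 path is reported for controller nvme0
    for _ in $(seq 1 10); do
        paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ "$paths" == "4421" ]] && break
        sleep 1
    done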
00:29:36.359 11:17:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.359 11:17:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:36.359 11:17:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:36.359 11:17:04 -- host/discovery.sh@55 -- # sort 00:29:36.359 11:17:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.359 11:17:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.359 11:17:04 -- host/discovery.sh@55 -- # xargs 00:29:36.359 11:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.359 11:17:04 -- common/autotest_common.sh@10 -- # set +x 00:29:36.630 [2024-04-18 11:17:04.999348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.630 [2024-04-18 11:17:04.999445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.630 [2024-04-18 11:17:04.999493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.630 [2024-04-18 11:17:04.999509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188920 with addr=10.0.0.2, port=4420 00:29:36.630 [2024-04-18 11:17:04.999520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.630 [2024-04-18 11:17:04.999536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.630 [2024-04-18 11:17:04.999560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.630 [2024-04-18 11:17:04.999571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.631 [2024-04-18 11:17:04.999580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.631 [2024-04-18 11:17:04.999595] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.631 [2024-04-18 11:17:05.009412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.631 [2024-04-18 11:17:05.009512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.631 [2024-04-18 11:17:05.009567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.631 [2024-04-18 11:17:05.009584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188920 with addr=10.0.0.2, port=4420 00:29:36.631 [2024-04-18 11:17:05.009594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188920 is same with the state(5) to be set 00:29:36.631 [2024-04-18 11:17:05.009611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188920 (9): Bad file descriptor 00:29:36.631 [2024-04-18 11:17:05.009625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.631 [2024-04-18 11:17:05.009634] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.631 [2024-04-18 11:17:05.009643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:36.631 [2024-04-18 11:17:05.009658] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.631 [2024-04-18 11:17:05.013732] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:36.631 [2024-04-18 11:17:05.013764] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:36.631 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:36.631 11:17:05 -- common/autotest_common.sh@904 -- # return 0 00:29:36.631 11:17:05 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.631 11:17:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:36.631 11:17:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:36.631 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.631 11:17:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:36.631 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:29:36.631 11:17:05 -- host/discovery.sh@63 -- # sort -n 00:29:36.631 11:17:05 -- host/discovery.sh@63 -- # xargs 00:29:36.631 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:29:36.631 11:17:05 -- common/autotest_common.sh@904 -- # return 0 00:29:36.631 11:17:05 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:36.631 11:17:05 -- host/discovery.sh@79 -- # expected_count=0 00:29:36.631 11:17:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:36.631 11:17:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:36.631 11:17:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.631 11:17:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:36.631 11:17:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:36.631 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.631 11:17:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:36.631 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:29:36.631 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.631 11:17:05 -- host/discovery.sh@74 -- # notification_count=0 00:29:36.631 11:17:05 -- host/discovery.sh@75 -- # notify_id=2 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:36.631 11:17:05 -- common/autotest_common.sh@904 -- # return 0 00:29:36.631 11:17:05 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:36.631 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.631 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:29:36.631 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.631 11:17:05 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.631 11:17:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:36.631 11:17:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.631 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.631 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:29:36.631 11:17:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.631 11:17:05 -- host/discovery.sh@59 -- # sort 00:29:36.631 11:17:05 -- host/discovery.sh@59 -- # xargs 00:29:36.631 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:36.631 11:17:05 -- common/autotest_common.sh@904 -- # return 0 00:29:36.631 11:17:05 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.631 11:17:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:36.631 11:17:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:36.631 11:17:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.631 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.631 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:29:36.631 11:17:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.631 11:17:05 -- host/discovery.sh@55 -- # xargs 00:29:36.631 11:17:05 -- host/discovery.sh@55 -- # sort 00:29:36.631 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.900 11:17:05 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:36.900 11:17:05 -- common/autotest_common.sh@904 -- # return 0 00:29:36.900 11:17:05 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:36.900 11:17:05 -- host/discovery.sh@79 -- # expected_count=2 00:29:36.900 11:17:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:36.900 11:17:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:29:36.900 11:17:05 -- common/autotest_common.sh@901 -- # local max=10 00:29:36.900 11:17:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:36.900 11:17:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:36.900 11:17:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:36.900 11:17:05 -- host/discovery.sh@74 -- # jq '. | length' 00:29:36.900 11:17:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:36.900 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.900 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:29:36.900 11:17:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.900 11:17:05 -- host/discovery.sh@74 -- # notification_count=2 00:29:36.900 11:17:05 -- host/discovery.sh@75 -- # notify_id=4 00:29:36.900 11:17:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:36.900 11:17:05 -- common/autotest_common.sh@904 -- # return 0 00:29:36.900 11:17:05 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:36.900 11:17:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.900 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:29:37.834 [2024-04-18 11:17:06.369863] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:37.834 [2024-04-18 11:17:06.369908] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:37.834 [2024-04-18 11:17:06.369928] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:37.834 [2024-04-18 11:17:06.455649] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:38.092 [2024-04-18 11:17:06.515408] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:38.092 [2024-04-18 11:17:06.515486] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:38.092 11:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.092 11:17:06 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.092 11:17:06 -- common/autotest_common.sh@638 -- # local es=0 00:29:38.092 11:17:06 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.092 11:17:06 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:38.092 11:17:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:38.092 11:17:06 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:38.092 11:17:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:38.092 11:17:06 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.092 11:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.092 11:17:06 -- common/autotest_common.sh@10 -- # set +x 00:29:38.092 2024/04/18 11:17:06 error on 
JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:29:38.092 request: 00:29:38.092 { 00:29:38.092 "method": "bdev_nvme_start_discovery", 00:29:38.092 "params": { 00:29:38.092 "name": "nvme", 00:29:38.092 "trtype": "tcp", 00:29:38.092 "traddr": "10.0.0.2", 00:29:38.092 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:38.092 "adrfam": "ipv4", 00:29:38.092 "trsvcid": "8009", 00:29:38.092 "wait_for_attach": true 00:29:38.092 } 00:29:38.092 } 00:29:38.092 Got JSON-RPC error response 00:29:38.092 GoRPCClient: error on JSON-RPC call 00:29:38.092 11:17:06 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:38.092 11:17:06 -- common/autotest_common.sh@641 -- # es=1 00:29:38.092 11:17:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:38.092 11:17:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:38.092 11:17:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:38.092 11:17:06 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:38.092 11:17:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:38.092 11:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.092 11:17:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:38.093 11:17:06 -- common/autotest_common.sh@10 -- # set +x 00:29:38.093 11:17:06 -- host/discovery.sh@67 -- # sort 00:29:38.093 11:17:06 -- host/discovery.sh@67 -- # xargs 00:29:38.093 11:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.093 11:17:06 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:38.093 11:17:06 -- host/discovery.sh@146 -- # get_bdev_list 00:29:38.093 11:17:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:38.093 11:17:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:38.093 11:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.093 11:17:06 -- common/autotest_common.sh@10 -- # set +x 00:29:38.093 11:17:06 -- host/discovery.sh@55 -- # sort 00:29:38.093 11:17:06 -- host/discovery.sh@55 -- # xargs 00:29:38.093 11:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.093 11:17:06 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:38.093 11:17:06 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.093 11:17:06 -- common/autotest_common.sh@638 -- # local es=0 00:29:38.093 11:17:06 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.093 11:17:06 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:38.093 11:17:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:38.093 11:17:06 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:38.093 11:17:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:38.093 11:17:06 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.093 11:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.093 11:17:06 -- common/autotest_common.sh@10 -- # 
set +x 00:29:38.093 2024/04/18 11:17:06 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:29:38.093 request: 00:29:38.093 { 00:29:38.093 "method": "bdev_nvme_start_discovery", 00:29:38.093 "params": { 00:29:38.093 "name": "nvme_second", 00:29:38.093 "trtype": "tcp", 00:29:38.093 "traddr": "10.0.0.2", 00:29:38.093 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:38.093 "adrfam": "ipv4", 00:29:38.093 "trsvcid": "8009", 00:29:38.093 "wait_for_attach": true 00:29:38.093 } 00:29:38.093 } 00:29:38.093 Got JSON-RPC error response 00:29:38.093 GoRPCClient: error on JSON-RPC call 00:29:38.093 11:17:06 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:38.093 11:17:06 -- common/autotest_common.sh@641 -- # es=1 00:29:38.093 11:17:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:38.093 11:17:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:38.093 11:17:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:38.093 11:17:06 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:38.093 11:17:06 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:38.093 11:17:06 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:38.093 11:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.093 11:17:06 -- common/autotest_common.sh@10 -- # set +x 00:29:38.093 11:17:06 -- host/discovery.sh@67 -- # sort 00:29:38.093 11:17:06 -- host/discovery.sh@67 -- # xargs 00:29:38.093 11:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.093 11:17:06 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:38.352 11:17:06 -- host/discovery.sh@152 -- # get_bdev_list 00:29:38.352 11:17:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:38.352 11:17:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:38.352 11:17:06 -- host/discovery.sh@55 -- # sort 00:29:38.352 11:17:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.352 11:17:06 -- common/autotest_common.sh@10 -- # set +x 00:29:38.352 11:17:06 -- host/discovery.sh@55 -- # xargs 00:29:38.352 11:17:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.352 11:17:06 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:38.352 11:17:06 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:38.352 11:17:06 -- common/autotest_common.sh@638 -- # local es=0 00:29:38.352 11:17:06 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:38.352 11:17:06 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:38.352 11:17:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:38.352 11:17:06 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:38.352 11:17:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:38.352 11:17:06 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:38.352 11:17:06 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:29:38.352 11:17:06 -- common/autotest_common.sh@10 -- # set +x 00:29:39.286 [2024-04-18 11:17:07.807887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.286 [2024-04-18 11:17:07.808005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.286 [2024-04-18 11:17:07.808025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2187c20 with addr=10.0.0.2, port=8010 00:29:39.286 [2024-04-18 11:17:07.808069] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:39.286 [2024-04-18 11:17:07.808084] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:39.286 [2024-04-18 11:17:07.808094] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:40.220 [2024-04-18 11:17:08.807880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.220 [2024-04-18 11:17:08.808004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.220 [2024-04-18 11:17:08.808024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2187c20 with addr=10.0.0.2, port=8010 00:29:40.220 [2024-04-18 11:17:08.808069] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:40.220 [2024-04-18 11:17:08.808081] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:40.220 [2024-04-18 11:17:08.808092] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:41.595 [2024-04-18 11:17:09.807711] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:41.595 2024/04/18 11:17:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:29:41.595 request: 00:29:41.595 { 00:29:41.595 "method": "bdev_nvme_start_discovery", 00:29:41.595 "params": { 00:29:41.595 "name": "nvme_second", 00:29:41.595 "trtype": "tcp", 00:29:41.595 "traddr": "10.0.0.2", 00:29:41.595 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:41.595 "adrfam": "ipv4", 00:29:41.595 "trsvcid": "8010", 00:29:41.595 "attach_timeout_ms": 3000 00:29:41.595 } 00:29:41.595 } 00:29:41.595 Got JSON-RPC error response 00:29:41.595 GoRPCClient: error on JSON-RPC call 00:29:41.595 11:17:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:41.595 11:17:09 -- common/autotest_common.sh@641 -- # es=1 00:29:41.595 11:17:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:41.595 11:17:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:41.595 11:17:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:41.595 11:17:09 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:41.595 11:17:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:41.595 11:17:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:41.595 11:17:09 -- common/autotest_common.sh@10 -- # set +x 00:29:41.595 11:17:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:41.595 11:17:09 -- host/discovery.sh@67 -- # sort 00:29:41.595 11:17:09 -- host/discovery.sh@67 -- # xargs 00:29:41.595 11:17:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.595 11:17:09 -- 
host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:41.595 11:17:09 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:41.595 11:17:09 -- host/discovery.sh@161 -- # kill 101432 00:29:41.595 11:17:09 -- host/discovery.sh@162 -- # nvmftestfini 00:29:41.595 11:17:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:41.595 11:17:09 -- nvmf/common.sh@117 -- # sync 00:29:41.595 11:17:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:41.595 11:17:09 -- nvmf/common.sh@120 -- # set +e 00:29:41.595 11:17:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.595 11:17:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:41.595 rmmod nvme_tcp 00:29:41.595 rmmod nvme_fabrics 00:29:41.595 rmmod nvme_keyring 00:29:41.595 11:17:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.595 11:17:09 -- nvmf/common.sh@124 -- # set -e 00:29:41.595 11:17:09 -- nvmf/common.sh@125 -- # return 0 00:29:41.595 11:17:09 -- nvmf/common.sh@478 -- # '[' -n 101382 ']' 00:29:41.595 11:17:09 -- nvmf/common.sh@479 -- # killprocess 101382 00:29:41.595 11:17:09 -- common/autotest_common.sh@936 -- # '[' -z 101382 ']' 00:29:41.595 11:17:09 -- common/autotest_common.sh@940 -- # kill -0 101382 00:29:41.595 11:17:09 -- common/autotest_common.sh@941 -- # uname 00:29:41.595 11:17:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.595 11:17:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101382 00:29:41.595 killing process with pid 101382 00:29:41.595 11:17:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:41.595 11:17:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:41.595 11:17:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101382' 00:29:41.595 11:17:09 -- common/autotest_common.sh@955 -- # kill 101382 00:29:41.595 11:17:09 -- common/autotest_common.sh@960 -- # wait 101382 00:29:41.595 11:17:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:41.595 11:17:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:41.595 11:17:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:41.595 11:17:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:41.595 11:17:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:41.595 11:17:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.595 11:17:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.596 11:17:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.596 11:17:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:41.596 00:29:41.596 real 0m11.147s 00:29:41.596 user 0m22.061s 00:29:41.596 sys 0m1.665s 00:29:41.596 11:17:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:41.596 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:29:41.596 ************************************ 00:29:41.596 END TEST nvmf_discovery 00:29:41.596 ************************************ 00:29:41.854 11:17:10 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:41.854 11:17:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:41.854 11:17:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:41.854 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:29:41.854 ************************************ 00:29:41.854 START TEST nvmf_discovery_remove_ifc 00:29:41.854 ************************************ 00:29:41.854 11:17:10 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:41.854 * Looking for test storage... 00:29:41.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:41.854 11:17:10 -- nvmf/common.sh@7 -- # uname -s 00:29:41.854 11:17:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.854 11:17:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.854 11:17:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.854 11:17:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.854 11:17:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.854 11:17:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.854 11:17:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.854 11:17:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.854 11:17:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.854 11:17:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.854 11:17:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:41.854 11:17:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:41.854 11:17:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.854 11:17:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.854 11:17:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:41.854 11:17:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.854 11:17:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:41.854 11:17:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.854 11:17:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.854 11:17:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.854 11:17:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.854 11:17:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.854 11:17:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.854 11:17:10 -- paths/export.sh@5 -- # export PATH 00:29:41.854 11:17:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.854 11:17:10 -- nvmf/common.sh@47 -- # : 0 00:29:41.854 11:17:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.854 11:17:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.854 11:17:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.854 11:17:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.854 11:17:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.854 11:17:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.854 11:17:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.854 11:17:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:41.854 11:17:10 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:41.854 11:17:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:41.854 11:17:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.854 11:17:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:41.854 11:17:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:41.854 11:17:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:41.854 11:17:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.854 11:17:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.854 11:17:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.854 11:17:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:41.854 11:17:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:41.854 11:17:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:41.854 11:17:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:41.854 11:17:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:41.854 11:17:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:41.854 11:17:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.854 11:17:10 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.854 11:17:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:41.854 11:17:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:41.854 11:17:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:41.854 11:17:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:41.854 11:17:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:41.854 11:17:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.854 11:17:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:41.855 11:17:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:41.855 11:17:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:41.855 11:17:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:41.855 11:17:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:41.855 11:17:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:41.855 Cannot find device "nvmf_tgt_br" 00:29:41.855 11:17:10 -- nvmf/common.sh@155 -- # true 00:29:41.855 11:17:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:41.855 Cannot find device "nvmf_tgt_br2" 00:29:41.855 11:17:10 -- nvmf/common.sh@156 -- # true 00:29:41.855 11:17:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:42.113 11:17:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:42.113 Cannot find device "nvmf_tgt_br" 00:29:42.113 11:17:10 -- nvmf/common.sh@158 -- # true 00:29:42.113 11:17:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:42.113 Cannot find device "nvmf_tgt_br2" 00:29:42.113 11:17:10 -- nvmf/common.sh@159 -- # true 00:29:42.113 11:17:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:42.113 11:17:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:42.113 11:17:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:42.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:42.113 11:17:10 -- nvmf/common.sh@162 -- # true 00:29:42.113 11:17:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:42.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:42.113 11:17:10 -- nvmf/common.sh@163 -- # true 00:29:42.113 11:17:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:42.113 11:17:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:42.113 11:17:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:42.113 11:17:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:42.113 11:17:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:42.113 11:17:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:42.113 11:17:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:42.113 11:17:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:42.113 11:17:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:42.113 11:17:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:42.113 11:17:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:42.113 11:17:10 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:42.113 11:17:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:42.113 11:17:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:42.114 11:17:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:42.114 11:17:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:42.114 11:17:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:42.114 11:17:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:42.114 11:17:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:42.114 11:17:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:42.114 11:17:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:42.114 11:17:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:42.114 11:17:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:42.114 11:17:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:42.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:29:42.114 00:29:42.114 --- 10.0.0.2 ping statistics --- 00:29:42.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.114 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:42.114 11:17:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:42.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:42.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:29:42.114 00:29:42.114 --- 10.0.0.3 ping statistics --- 00:29:42.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.114 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:29:42.114 11:17:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:42.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:29:42.373 00:29:42.373 --- 10.0.0.1 ping statistics --- 00:29:42.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.373 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:29:42.373 11:17:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.373 11:17:10 -- nvmf/common.sh@422 -- # return 0 00:29:42.373 11:17:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:42.373 11:17:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.373 11:17:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:42.373 11:17:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:42.373 11:17:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.373 11:17:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:42.373 11:17:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:42.373 11:17:10 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:42.373 11:17:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:42.373 11:17:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:42.373 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:29:42.373 11:17:10 -- nvmf/common.sh@470 -- # nvmfpid=101916 00:29:42.373 11:17:10 -- nvmf/common.sh@471 -- # waitforlisten 101916 00:29:42.373 11:17:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:42.373 11:17:10 -- common/autotest_common.sh@817 -- # '[' -z 101916 ']' 00:29:42.373 11:17:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.373 11:17:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:42.373 11:17:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.373 11:17:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:42.373 11:17:10 -- common/autotest_common.sh@10 -- # set +x 00:29:42.373 [2024-04-18 11:17:10.838356] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:42.373 [2024-04-18 11:17:10.838461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.373 [2024-04-18 11:17:10.984053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.631 [2024-04-18 11:17:11.082216] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.631 [2024-04-18 11:17:11.082279] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.631 [2024-04-18 11:17:11.082293] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.631 [2024-04-18 11:17:11.082304] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.631 [2024-04-18 11:17:11.082313] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
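The nvmf_veth_init sequence above builds the topology the rest of this run depends on: the target lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the veth pairs are bridged so all three ping checks pass before the target starts. A condensed sketch of that setup, with interface names and addresses copied from the log and error handling plus the iptables rules omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2   # initiator to target, the 0.070 ms check above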
00:29:42.631 [2024-04-18 11:17:11.082357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.198 11:17:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:43.198 11:17:11 -- common/autotest_common.sh@850 -- # return 0 00:29:43.198 11:17:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:43.198 11:17:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:43.198 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:29:43.198 11:17:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.198 11:17:11 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:43.198 11:17:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:43.198 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:29:43.198 [2024-04-18 11:17:11.830916] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.198 [2024-04-18 11:17:11.839053] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:43.457 null0 00:29:43.457 [2024-04-18 11:17:11.871028] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.457 11:17:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:43.457 11:17:11 -- host/discovery_remove_ifc.sh@59 -- # hostpid=101965 00:29:43.457 11:17:11 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:43.457 11:17:11 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 101965 /tmp/host.sock 00:29:43.457 11:17:11 -- common/autotest_common.sh@817 -- # '[' -z 101965 ']' 00:29:43.457 11:17:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:43.457 11:17:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:43.457 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:43.457 11:17:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:43.457 11:17:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:43.457 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:29:43.457 [2024-04-18 11:17:11.960808] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:29:43.457 [2024-04-18 11:17:11.960980] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101965 ] 00:29:43.715 [2024-04-18 11:17:12.106868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.715 [2024-04-18 11:17:12.214796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.282 11:17:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:44.282 11:17:12 -- common/autotest_common.sh@850 -- # return 0 00:29:44.282 11:17:12 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:44.282 11:17:12 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:44.282 11:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.282 11:17:12 -- common/autotest_common.sh@10 -- # set +x 00:29:44.282 11:17:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.282 11:17:12 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:44.282 11:17:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.282 11:17:12 -- common/autotest_common.sh@10 -- # set +x 00:29:44.541 11:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:44.541 11:17:13 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:44.541 11:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:44.541 11:17:13 -- common/autotest_common.sh@10 -- # set +x 00:29:45.476 [2024-04-18 11:17:14.024310] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:45.476 [2024-04-18 11:17:14.024347] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:45.476 [2024-04-18 11:17:14.024366] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:45.476 [2024-04-18 11:17:14.110444] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:45.735 [2024-04-18 11:17:14.166522] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:45.735 [2024-04-18 11:17:14.166606] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:45.735 [2024-04-18 11:17:14.166636] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:45.735 [2024-04-18 11:17:14.166654] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:45.735 [2024-04-18 11:17:14.166682] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:45.735 11:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:45.735 [2024-04-18 11:17:14.172690] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb42dd0 was disconnected and freed. delete nvme_qpair. 
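The host app has now attached a discovery controller via bdev_nvme_start_discovery (with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1), the discovered subsystem attaches as nvme0, and the test enters wait_for_bdev nvme0n1. Below is a minimal sketch of that polling loop as it appears in the trace; the socket path, jq filter, sort/xargs, and one-second sleep are taken from the trace, while the 30-iteration guard is an illustrative addition.

# Sketch of the wait_for_bdev / get_bdev_list polling seen in the trace.
expected=nvme0n1
for _ in $(seq 1 30); do    # guard is illustrative; the test simply loops until the list matches
    names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs)
    [[ "$names" == "$expected" ]] && break
    sleep 1
done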
00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:45.735 11:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.735 11:17:14 -- common/autotest_common.sh@10 -- # set +x 00:29:45.735 11:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.735 11:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:45.735 11:17:14 -- common/autotest_common.sh@10 -- # set +x 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:45.735 11:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:45.735 11:17:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:46.669 11:17:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:46.669 11:17:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:46.669 11:17:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:46.669 11:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.669 11:17:15 -- common/autotest_common.sh@10 -- # set +x 00:29:46.669 11:17:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:46.669 11:17:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:46.927 11:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.927 11:17:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:46.927 11:17:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:47.863 11:17:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:47.863 11:17:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:47.863 11:17:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:47.863 11:17:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.863 11:17:16 -- common/autotest_common.sh@10 -- # set +x 00:29:47.863 11:17:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:47.863 11:17:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:47.863 11:17:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.863 11:17:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:47.863 11:17:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:48.799 11:17:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:48.799 11:17:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.799 11:17:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:48.799 11:17:17 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:29:48.799 11:17:17 -- common/autotest_common.sh@10 -- # set +x 00:29:48.799 11:17:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:48.799 11:17:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:49.057 11:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.057 11:17:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:49.057 11:17:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:49.993 11:17:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:49.993 11:17:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:49.993 11:17:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:49.993 11:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:49.993 11:17:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:49.993 11:17:18 -- common/autotest_common.sh@10 -- # set +x 00:29:49.993 11:17:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:49.993 11:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:49.993 11:17:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:49.993 11:17:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:50.927 11:17:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:50.927 11:17:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.927 11:17:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:50.927 11:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.927 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:29:50.927 11:17:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:50.927 11:17:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:50.927 11:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.186 11:17:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:51.186 11:17:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:51.186 [2024-04-18 11:17:19.594318] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:51.186 [2024-04-18 11:17:19.594395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.186 [2024-04-18 11:17:19.594411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.186 [2024-04-18 11:17:19.594425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.186 [2024-04-18 11:17:19.594434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.186 [2024-04-18 11:17:19.594445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.186 [2024-04-18 11:17:19.594454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.186 [2024-04-18 11:17:19.594464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.186 [2024-04-18 11:17:19.594472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:51.186 [2024-04-18 11:17:19.594482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.186 [2024-04-18 11:17:19.594492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.186 [2024-04-18 11:17:19.594501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb09400 is same with the state(5) to be set 00:29:51.186 [2024-04-18 11:17:19.604314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09400 (9): Bad file descriptor 00:29:51.186 [2024-04-18 11:17:19.614346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:52.121 11:17:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:52.121 11:17:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.121 11:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:52.121 11:17:20 -- common/autotest_common.sh@10 -- # set +x 00:29:52.121 11:17:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:52.121 11:17:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:52.121 11:17:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:52.121 [2024-04-18 11:17:20.626148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:53.056 [2024-04-18 11:17:21.650171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:53.056 [2024-04-18 11:17:21.650312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb09400 with addr=10.0.0.2, port=4420 00:29:53.056 [2024-04-18 11:17:21.650349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb09400 is same with the state(5) to be set 00:29:53.056 [2024-04-18 11:17:21.651283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09400 (9): Bad file descriptor 00:29:53.056 [2024-04-18 11:17:21.651376] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
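The errno 110 timeouts, aborted admin commands, and "Bad file descriptor" errors above are the intended fallout of the failure injected earlier in the trace: the target's address was deleted and its veth brought down inside the namespace, so the host can no longer flush its queue pair and the driver begins resetting and reconnecting under the controller-loss and reconnect-delay options passed to bdev_nvme_start_discovery. A compact restatement of that failure-injection step and the check that follows it, using the same commands shown in the trace:

# Failure injection as performed earlier in the trace: take the target interface away,
# then poll until the attached bdev drops out of the host's bdev list.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
while [[ -n "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs)" ]]; do
    sleep 1   # keep polling until the controller is dropped and the list is empty
done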
00:29:53.056 [2024-04-18 11:17:21.651431] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:53.056 [2024-04-18 11:17:21.651510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.056 [2024-04-18 11:17:21.651541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.056 [2024-04-18 11:17:21.651568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.056 [2024-04-18 11:17:21.651590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.056 [2024-04-18 11:17:21.651611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.056 [2024-04-18 11:17:21.651632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.056 [2024-04-18 11:17:21.651654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.056 [2024-04-18 11:17:21.651673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.056 [2024-04-18 11:17:21.651695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.056 [2024-04-18 11:17:21.651715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.056 [2024-04-18 11:17:21.651736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
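With the controller now in the failed state and its discovery entry removed, the bdev list goes empty; the trace below then restores the interface and waits for a freshly discovered controller to attach as nvme1n1. A sketch of that recovery step, mirroring the ip and polling commands that follow:

# Recovery phase, as performed in the trace below: restore the target address and link,
# then wait for the re-discovered subsystem to attach as a new bdev (nvme1n1).
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
until [[ "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs)" == nvme1n1 ]]; do
    sleep 1
done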
00:29:53.056 [2024-04-18 11:17:21.651796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09810 (9): Bad file descriptor 00:29:53.056 [2024-04-18 11:17:21.652797] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:53.056 [2024-04-18 11:17:21.652854] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:53.056 11:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.056 11:17:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:53.057 11:17:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.475 11:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.475 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:54.475 11:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:54.475 11:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.475 11:17:22 -- common/autotest_common.sh@10 -- # set +x 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:54.475 11:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:54.475 11:17:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:55.039 [2024-04-18 11:17:23.655804] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:55.039 [2024-04-18 11:17:23.655853] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:55.039 [2024-04-18 11:17:23.655873] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:55.297 [2024-04-18 11:17:23.741941] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:55.297 [2024-04-18 11:17:23.797952] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:55.297 [2024-04-18 11:17:23.798026] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:55.297 [2024-04-18 11:17:23.798069] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:55.297 [2024-04-18 11:17:23.798087] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:29:55.297 [2024-04-18 11:17:23.798098] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.297 [2024-04-18 11:17:23.804350] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb162d0 was disconnected and freed. delete nvme_qpair. 00:29:55.297 11:17:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:55.297 11:17:23 -- common/autotest_common.sh@10 -- # set +x 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:55.297 11:17:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:55.297 11:17:23 -- host/discovery_remove_ifc.sh@90 -- # killprocess 101965 00:29:55.297 11:17:23 -- common/autotest_common.sh@936 -- # '[' -z 101965 ']' 00:29:55.297 11:17:23 -- common/autotest_common.sh@940 -- # kill -0 101965 00:29:55.297 11:17:23 -- common/autotest_common.sh@941 -- # uname 00:29:55.297 11:17:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:55.297 11:17:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101965 00:29:55.297 killing process with pid 101965 00:29:55.297 11:17:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:55.297 11:17:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:55.297 11:17:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101965' 00:29:55.297 11:17:23 -- common/autotest_common.sh@955 -- # kill 101965 00:29:55.297 11:17:23 -- common/autotest_common.sh@960 -- # wait 101965 00:29:55.556 11:17:24 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:55.556 11:17:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:55.556 11:17:24 -- nvmf/common.sh@117 -- # sync 00:29:55.556 11:17:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:55.556 11:17:24 -- nvmf/common.sh@120 -- # set +e 00:29:55.556 11:17:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:55.556 11:17:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:55.556 rmmod nvme_tcp 00:29:55.556 rmmod nvme_fabrics 00:29:55.556 rmmod nvme_keyring 00:29:55.556 11:17:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:55.556 11:17:24 -- nvmf/common.sh@124 -- # set -e 00:29:55.556 11:17:24 -- nvmf/common.sh@125 -- # return 0 00:29:55.556 11:17:24 -- nvmf/common.sh@478 -- # '[' -n 101916 ']' 00:29:55.556 11:17:24 -- nvmf/common.sh@479 -- # killprocess 101916 00:29:55.556 11:17:24 -- common/autotest_common.sh@936 -- # '[' -z 101916 ']' 00:29:55.556 11:17:24 -- common/autotest_common.sh@940 -- # kill -0 101916 00:29:55.814 11:17:24 -- common/autotest_common.sh@941 -- # uname 00:29:55.814 11:17:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:55.814 11:17:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101916 00:29:55.814 killing process with pid 101916 00:29:55.814 11:17:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:55.814 11:17:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:29:55.814 11:17:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101916' 00:29:55.814 11:17:24 -- common/autotest_common.sh@955 -- # kill 101916 00:29:55.814 11:17:24 -- common/autotest_common.sh@960 -- # wait 101916 00:29:55.814 11:17:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:55.814 11:17:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:55.814 11:17:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:55.814 11:17:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.814 11:17:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.814 11:17:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.814 11:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.814 11:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.073 11:17:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:56.073 00:29:56.073 real 0m14.132s 00:29:56.073 user 0m24.294s 00:29:56.073 sys 0m1.563s 00:29:56.073 11:17:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:56.073 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:29:56.073 ************************************ 00:29:56.073 END TEST nvmf_discovery_remove_ifc 00:29:56.073 ************************************ 00:29:56.073 11:17:24 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:56.073 11:17:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:56.073 11:17:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:56.073 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:29:56.073 ************************************ 00:29:56.073 START TEST nvmf_identify_kernel_target 00:29:56.073 ************************************ 00:29:56.073 11:17:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:56.073 * Looking for test storage... 
00:29:56.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:56.073 11:17:24 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:56.073 11:17:24 -- nvmf/common.sh@7 -- # uname -s 00:29:56.073 11:17:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.073 11:17:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.073 11:17:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.073 11:17:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.073 11:17:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.073 11:17:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.073 11:17:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.073 11:17:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.073 11:17:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.073 11:17:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.073 11:17:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:56.073 11:17:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:56.073 11:17:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.073 11:17:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.073 11:17:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:56.073 11:17:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.073 11:17:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:56.073 11:17:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.073 11:17:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.073 11:17:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.073 11:17:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.073 11:17:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.073 11:17:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.073 11:17:24 -- paths/export.sh@5 -- # export PATH 00:29:56.073 11:17:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.073 11:17:24 -- nvmf/common.sh@47 -- # : 0 00:29:56.073 11:17:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.073 11:17:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.073 11:17:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.073 11:17:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.073 11:17:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.073 11:17:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.073 11:17:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.073 11:17:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.073 11:17:24 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:56.073 11:17:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:56.073 11:17:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.073 11:17:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:56.073 11:17:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:56.073 11:17:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:56.073 11:17:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.073 11:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.073 11:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.073 11:17:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:56.073 11:17:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:56.073 11:17:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:56.073 11:17:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:56.073 11:17:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:56.073 11:17:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:56.073 11:17:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.073 11:17:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.073 11:17:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:56.073 11:17:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:56.073 11:17:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:56.073 11:17:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:56.073 11:17:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:56.073 11:17:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:56.073 11:17:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:56.073 11:17:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:56.073 11:17:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:56.073 11:17:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:56.073 11:17:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:56.332 11:17:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:56.332 Cannot find device "nvmf_tgt_br" 00:29:56.332 11:17:24 -- nvmf/common.sh@155 -- # true 00:29:56.332 11:17:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:56.332 Cannot find device "nvmf_tgt_br2" 00:29:56.332 11:17:24 -- nvmf/common.sh@156 -- # true 00:29:56.332 11:17:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:56.332 11:17:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:56.332 Cannot find device "nvmf_tgt_br" 00:29:56.332 11:17:24 -- nvmf/common.sh@158 -- # true 00:29:56.332 11:17:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:56.332 Cannot find device "nvmf_tgt_br2" 00:29:56.332 11:17:24 -- nvmf/common.sh@159 -- # true 00:29:56.332 11:17:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:56.332 11:17:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:56.332 11:17:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:56.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:56.332 11:17:24 -- nvmf/common.sh@162 -- # true 00:29:56.332 11:17:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:56.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:56.332 11:17:24 -- nvmf/common.sh@163 -- # true 00:29:56.332 11:17:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:56.332 11:17:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:56.332 11:17:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:56.332 11:17:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:56.332 11:17:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:56.332 11:17:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:56.332 11:17:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:56.332 11:17:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:56.332 11:17:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:56.332 11:17:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:56.332 11:17:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:56.332 11:17:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:56.332 11:17:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:56.332 11:17:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:56.332 11:17:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:56.332 11:17:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:56.332 11:17:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:56.332 11:17:24 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:56.332 11:17:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:56.332 11:17:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:56.589 11:17:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:56.589 11:17:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:56.589 11:17:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:56.589 11:17:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:56.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:29:56.589 00:29:56.589 --- 10.0.0.2 ping statistics --- 00:29:56.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.589 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:56.589 11:17:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:56.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:56.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:29:56.589 00:29:56.589 --- 10.0.0.3 ping statistics --- 00:29:56.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.589 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:29:56.589 11:17:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:56.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:29:56.589 00:29:56.589 --- 10.0.0.1 ping statistics --- 00:29:56.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.589 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:29:56.589 11:17:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.589 11:17:25 -- nvmf/common.sh@422 -- # return 0 00:29:56.589 11:17:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:56.589 11:17:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.589 11:17:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:56.589 11:17:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:56.589 11:17:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.589 11:17:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:56.589 11:17:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:56.589 11:17:25 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:56.589 11:17:25 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:56.589 11:17:25 -- nvmf/common.sh@717 -- # local ip 00:29:56.589 11:17:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:56.589 11:17:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:56.589 11:17:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.589 11:17:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.589 11:17:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:56.589 11:17:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.589 11:17:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:56.589 11:17:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:56.589 11:17:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:56.589 11:17:25 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:56.589 11:17:25 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:56.589 11:17:25 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:56.589 11:17:25 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:56.589 11:17:25 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:56.589 11:17:25 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:56.589 11:17:25 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:56.589 11:17:25 -- nvmf/common.sh@628 -- # local block nvme 00:29:56.589 11:17:25 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:29:56.589 11:17:25 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:56.589 11:17:25 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:56.589 11:17:25 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:56.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:56.847 Waiting for block devices as requested 00:29:56.847 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:57.104 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:57.104 11:17:25 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:57.104 11:17:25 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:57.105 11:17:25 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:57.105 11:17:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:57.105 11:17:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:57.105 11:17:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:57.105 11:17:25 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:57.105 11:17:25 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:57.105 11:17:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:57.105 No valid GPT data, bailing 00:29:57.105 11:17:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:57.105 11:17:25 -- scripts/common.sh@391 -- # pt= 00:29:57.105 11:17:25 -- scripts/common.sh@392 -- # return 1 00:29:57.105 11:17:25 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:57.105 11:17:25 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:57.105 11:17:25 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:57.105 11:17:25 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:29:57.105 11:17:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:29:57.105 11:17:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:57.105 11:17:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:57.105 11:17:25 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:29:57.105 11:17:25 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:29:57.105 11:17:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:57.105 No valid GPT data, bailing 00:29:57.105 11:17:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:29:57.105 11:17:25 -- scripts/common.sh@391 -- # pt= 00:29:57.105 11:17:25 -- scripts/common.sh@392 -- # return 1 00:29:57.105 11:17:25 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:29:57.105 11:17:25 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:57.105 11:17:25 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:57.105 11:17:25 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:29:57.105 11:17:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:29:57.105 11:17:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:57.105 11:17:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:57.105 11:17:25 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:29:57.105 11:17:25 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:29:57.105 11:17:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:57.363 No valid GPT data, bailing 00:29:57.363 11:17:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:57.363 11:17:25 -- scripts/common.sh@391 -- # pt= 00:29:57.363 11:17:25 -- scripts/common.sh@392 -- # return 1 00:29:57.363 11:17:25 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:29:57.363 11:17:25 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:57.363 11:17:25 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:57.363 11:17:25 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:29:57.363 11:17:25 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:57.363 11:17:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:57.363 11:17:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:57.363 11:17:25 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:29:57.363 11:17:25 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:29:57.363 11:17:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:57.363 No valid GPT data, bailing 00:29:57.363 11:17:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:57.363 11:17:25 -- scripts/common.sh@391 -- # pt= 00:29:57.363 11:17:25 -- scripts/common.sh@392 -- # return 1 00:29:57.363 11:17:25 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:29:57.363 11:17:25 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:29:57.363 11:17:25 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:57.363 11:17:25 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:57.363 11:17:25 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:57.363 11:17:25 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:57.363 11:17:25 -- nvmf/common.sh@656 -- # echo 1 00:29:57.363 11:17:25 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:29:57.363 11:17:25 -- nvmf/common.sh@658 -- # echo 1 00:29:57.363 11:17:25 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:57.363 11:17:25 -- nvmf/common.sh@661 -- # echo tcp 00:29:57.363 11:17:25 -- nvmf/common.sh@662 -- # echo 4420 00:29:57.363 11:17:25 -- nvmf/common.sh@663 -- # echo ipv4 00:29:57.363 11:17:25 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:57.363 11:17:25 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -a 10.0.0.1 -t tcp -s 4420 00:29:57.363 00:29:57.363 Discovery Log Number of Records 2, Generation counter 2 00:29:57.363 =====Discovery Log Entry 0====== 00:29:57.363 trtype: tcp 00:29:57.363 adrfam: ipv4 00:29:57.363 subtype: current discovery subsystem 00:29:57.363 treq: not specified, sq flow control disable supported 00:29:57.363 portid: 1 00:29:57.363 trsvcid: 4420 00:29:57.363 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:57.363 traddr: 10.0.0.1 00:29:57.363 eflags: none 00:29:57.363 sectype: none 00:29:57.363 =====Discovery Log Entry 1====== 00:29:57.363 trtype: tcp 00:29:57.363 adrfam: ipv4 00:29:57.363 subtype: nvme subsystem 00:29:57.363 treq: not specified, sq flow control disable supported 00:29:57.363 portid: 1 00:29:57.363 trsvcid: 4420 00:29:57.363 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:57.363 traddr: 10.0.0.1 00:29:57.363 eflags: none 00:29:57.363 sectype: none 00:29:57.363 11:17:25 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:57.363 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:57.621 ===================================================== 00:29:57.621 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:57.621 ===================================================== 00:29:57.621 Controller Capabilities/Features 00:29:57.621 ================================ 00:29:57.621 Vendor ID: 0000 00:29:57.621 Subsystem Vendor ID: 0000 00:29:57.621 Serial Number: 7495d9cba0f0f5e1785a 00:29:57.621 Model Number: Linux 00:29:57.621 Firmware Version: 6.7.0-68 00:29:57.621 Recommended Arb Burst: 0 00:29:57.621 IEEE OUI Identifier: 00 00 00 00:29:57.621 Multi-path I/O 00:29:57.621 May have multiple subsystem ports: No 00:29:57.621 May have multiple controllers: No 00:29:57.621 Associated with SR-IOV VF: No 00:29:57.621 Max Data Transfer Size: Unlimited 00:29:57.621 Max Number of Namespaces: 0 00:29:57.621 Max Number of I/O Queues: 1024 00:29:57.621 NVMe Specification Version (VS): 1.3 00:29:57.621 NVMe Specification Version (Identify): 1.3 00:29:57.621 Maximum Queue Entries: 1024 00:29:57.621 Contiguous Queues Required: No 00:29:57.621 Arbitration Mechanisms Supported 00:29:57.621 Weighted Round Robin: Not Supported 00:29:57.621 Vendor Specific: Not Supported 00:29:57.621 Reset Timeout: 7500 ms 00:29:57.621 Doorbell Stride: 4 bytes 00:29:57.621 NVM Subsystem Reset: Not Supported 00:29:57.621 Command Sets Supported 00:29:57.621 NVM Command Set: Supported 00:29:57.621 Boot Partition: Not Supported 00:29:57.621 Memory Page Size Minimum: 4096 bytes 00:29:57.621 Memory Page Size Maximum: 4096 bytes 00:29:57.621 Persistent Memory Region: Not Supported 00:29:57.621 Optional Asynchronous Events Supported 00:29:57.621 Namespace Attribute Notices: Not Supported 00:29:57.621 Firmware Activation Notices: Not Supported 00:29:57.621 ANA Change Notices: Not Supported 00:29:57.621 PLE Aggregate Log Change Notices: Not Supported 00:29:57.621 LBA Status Info Alert Notices: Not Supported 00:29:57.621 EGE Aggregate Log Change Notices: Not Supported 00:29:57.621 Normal NVM Subsystem Shutdown event: Not Supported 00:29:57.621 Zone Descriptor Change Notices: Not Supported 00:29:57.621 Discovery Log Change Notices: Supported 00:29:57.621 Controller Attributes 00:29:57.621 128-bit Host Identifier: Not Supported 00:29:57.621 Non-Operational Permissive Mode: Not Supported 00:29:57.621 NVM Sets: Not Supported 00:29:57.621 Read Recovery Levels: Not Supported 00:29:57.621 Endurance Groups: Not Supported 00:29:57.621 Predictable Latency Mode: Not Supported 00:29:57.621 Traffic Based Keep ALive: Not Supported 00:29:57.621 Namespace Granularity: Not Supported 00:29:57.621 SQ Associations: Not Supported 00:29:57.621 UUID List: Not Supported 00:29:57.621 Multi-Domain Subsystem: Not Supported 00:29:57.621 Fixed Capacity Management: Not Supported 
00:29:57.621 Variable Capacity Management: Not Supported 00:29:57.621 Delete Endurance Group: Not Supported 00:29:57.621 Delete NVM Set: Not Supported 00:29:57.621 Extended LBA Formats Supported: Not Supported 00:29:57.621 Flexible Data Placement Supported: Not Supported 00:29:57.621 00:29:57.621 Controller Memory Buffer Support 00:29:57.621 ================================ 00:29:57.621 Supported: No 00:29:57.621 00:29:57.621 Persistent Memory Region Support 00:29:57.621 ================================ 00:29:57.621 Supported: No 00:29:57.621 00:29:57.621 Admin Command Set Attributes 00:29:57.621 ============================ 00:29:57.621 Security Send/Receive: Not Supported 00:29:57.621 Format NVM: Not Supported 00:29:57.621 Firmware Activate/Download: Not Supported 00:29:57.621 Namespace Management: Not Supported 00:29:57.621 Device Self-Test: Not Supported 00:29:57.621 Directives: Not Supported 00:29:57.621 NVMe-MI: Not Supported 00:29:57.621 Virtualization Management: Not Supported 00:29:57.621 Doorbell Buffer Config: Not Supported 00:29:57.621 Get LBA Status Capability: Not Supported 00:29:57.621 Command & Feature Lockdown Capability: Not Supported 00:29:57.621 Abort Command Limit: 1 00:29:57.621 Async Event Request Limit: 1 00:29:57.621 Number of Firmware Slots: N/A 00:29:57.621 Firmware Slot 1 Read-Only: N/A 00:29:57.621 Firmware Activation Without Reset: N/A 00:29:57.621 Multiple Update Detection Support: N/A 00:29:57.621 Firmware Update Granularity: No Information Provided 00:29:57.621 Per-Namespace SMART Log: No 00:29:57.621 Asymmetric Namespace Access Log Page: Not Supported 00:29:57.621 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:57.621 Command Effects Log Page: Not Supported 00:29:57.621 Get Log Page Extended Data: Supported 00:29:57.621 Telemetry Log Pages: Not Supported 00:29:57.621 Persistent Event Log Pages: Not Supported 00:29:57.621 Supported Log Pages Log Page: May Support 00:29:57.621 Commands Supported & Effects Log Page: Not Supported 00:29:57.621 Feature Identifiers & Effects Log Page:May Support 00:29:57.621 NVMe-MI Commands & Effects Log Page: May Support 00:29:57.622 Data Area 4 for Telemetry Log: Not Supported 00:29:57.622 Error Log Page Entries Supported: 1 00:29:57.622 Keep Alive: Not Supported 00:29:57.622 00:29:57.622 NVM Command Set Attributes 00:29:57.622 ========================== 00:29:57.622 Submission Queue Entry Size 00:29:57.622 Max: 1 00:29:57.622 Min: 1 00:29:57.622 Completion Queue Entry Size 00:29:57.622 Max: 1 00:29:57.622 Min: 1 00:29:57.622 Number of Namespaces: 0 00:29:57.622 Compare Command: Not Supported 00:29:57.622 Write Uncorrectable Command: Not Supported 00:29:57.622 Dataset Management Command: Not Supported 00:29:57.622 Write Zeroes Command: Not Supported 00:29:57.622 Set Features Save Field: Not Supported 00:29:57.622 Reservations: Not Supported 00:29:57.622 Timestamp: Not Supported 00:29:57.622 Copy: Not Supported 00:29:57.622 Volatile Write Cache: Not Present 00:29:57.622 Atomic Write Unit (Normal): 1 00:29:57.622 Atomic Write Unit (PFail): 1 00:29:57.622 Atomic Compare & Write Unit: 1 00:29:57.622 Fused Compare & Write: Not Supported 00:29:57.622 Scatter-Gather List 00:29:57.622 SGL Command Set: Supported 00:29:57.622 SGL Keyed: Not Supported 00:29:57.622 SGL Bit Bucket Descriptor: Not Supported 00:29:57.622 SGL Metadata Pointer: Not Supported 00:29:57.622 Oversized SGL: Not Supported 00:29:57.622 SGL Metadata Address: Not Supported 00:29:57.622 SGL Offset: Supported 00:29:57.622 Transport SGL Data Block: Not 
Supported 00:29:57.622 Replay Protected Memory Block: Not Supported 00:29:57.622 00:29:57.622 Firmware Slot Information 00:29:57.622 ========================= 00:29:57.622 Active slot: 0 00:29:57.622 00:29:57.622 00:29:57.622 Error Log 00:29:57.622 ========= 00:29:57.622 00:29:57.622 Active Namespaces 00:29:57.622 ================= 00:29:57.622 Discovery Log Page 00:29:57.622 ================== 00:29:57.622 Generation Counter: 2 00:29:57.622 Number of Records: 2 00:29:57.622 Record Format: 0 00:29:57.622 00:29:57.622 Discovery Log Entry 0 00:29:57.622 ---------------------- 00:29:57.622 Transport Type: 3 (TCP) 00:29:57.622 Address Family: 1 (IPv4) 00:29:57.622 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:57.622 Entry Flags: 00:29:57.622 Duplicate Returned Information: 0 00:29:57.622 Explicit Persistent Connection Support for Discovery: 0 00:29:57.622 Transport Requirements: 00:29:57.622 Secure Channel: Not Specified 00:29:57.622 Port ID: 1 (0x0001) 00:29:57.622 Controller ID: 65535 (0xffff) 00:29:57.622 Admin Max SQ Size: 32 00:29:57.622 Transport Service Identifier: 4420 00:29:57.622 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:57.622 Transport Address: 10.0.0.1 00:29:57.622 Discovery Log Entry 1 00:29:57.622 ---------------------- 00:29:57.622 Transport Type: 3 (TCP) 00:29:57.622 Address Family: 1 (IPv4) 00:29:57.622 Subsystem Type: 2 (NVM Subsystem) 00:29:57.622 Entry Flags: 00:29:57.622 Duplicate Returned Information: 0 00:29:57.622 Explicit Persistent Connection Support for Discovery: 0 00:29:57.622 Transport Requirements: 00:29:57.622 Secure Channel: Not Specified 00:29:57.622 Port ID: 1 (0x0001) 00:29:57.622 Controller ID: 65535 (0xffff) 00:29:57.622 Admin Max SQ Size: 32 00:29:57.622 Transport Service Identifier: 4420 00:29:57.622 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:57.622 Transport Address: 10.0.0.1 00:29:57.622 11:17:26 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:57.881 get_feature(0x01) failed 00:29:57.881 get_feature(0x02) failed 00:29:57.881 get_feature(0x04) failed 00:29:57.881 ===================================================== 00:29:57.881 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:57.881 ===================================================== 00:29:57.881 Controller Capabilities/Features 00:29:57.881 ================================ 00:29:57.881 Vendor ID: 0000 00:29:57.881 Subsystem Vendor ID: 0000 00:29:57.881 Serial Number: c12abfb8962dfff22c72 00:29:57.881 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:57.881 Firmware Version: 6.7.0-68 00:29:57.881 Recommended Arb Burst: 6 00:29:57.881 IEEE OUI Identifier: 00 00 00 00:29:57.881 Multi-path I/O 00:29:57.881 May have multiple subsystem ports: Yes 00:29:57.881 May have multiple controllers: Yes 00:29:57.881 Associated with SR-IOV VF: No 00:29:57.881 Max Data Transfer Size: Unlimited 00:29:57.881 Max Number of Namespaces: 1024 00:29:57.881 Max Number of I/O Queues: 128 00:29:57.881 NVMe Specification Version (VS): 1.3 00:29:57.881 NVMe Specification Version (Identify): 1.3 00:29:57.881 Maximum Queue Entries: 1024 00:29:57.881 Contiguous Queues Required: No 00:29:57.881 Arbitration Mechanisms Supported 00:29:57.881 Weighted Round Robin: Not Supported 00:29:57.881 Vendor Specific: Not Supported 00:29:57.881 Reset Timeout: 7500 ms 00:29:57.881 Doorbell Stride: 4 bytes 
00:29:57.881 NVM Subsystem Reset: Not Supported 00:29:57.881 Command Sets Supported 00:29:57.881 NVM Command Set: Supported 00:29:57.881 Boot Partition: Not Supported 00:29:57.881 Memory Page Size Minimum: 4096 bytes 00:29:57.881 Memory Page Size Maximum: 4096 bytes 00:29:57.881 Persistent Memory Region: Not Supported 00:29:57.881 Optional Asynchronous Events Supported 00:29:57.881 Namespace Attribute Notices: Supported 00:29:57.881 Firmware Activation Notices: Not Supported 00:29:57.881 ANA Change Notices: Supported 00:29:57.881 PLE Aggregate Log Change Notices: Not Supported 00:29:57.881 LBA Status Info Alert Notices: Not Supported 00:29:57.881 EGE Aggregate Log Change Notices: Not Supported 00:29:57.881 Normal NVM Subsystem Shutdown event: Not Supported 00:29:57.881 Zone Descriptor Change Notices: Not Supported 00:29:57.881 Discovery Log Change Notices: Not Supported 00:29:57.881 Controller Attributes 00:29:57.881 128-bit Host Identifier: Supported 00:29:57.881 Non-Operational Permissive Mode: Not Supported 00:29:57.881 NVM Sets: Not Supported 00:29:57.881 Read Recovery Levels: Not Supported 00:29:57.881 Endurance Groups: Not Supported 00:29:57.881 Predictable Latency Mode: Not Supported 00:29:57.881 Traffic Based Keep ALive: Supported 00:29:57.881 Namespace Granularity: Not Supported 00:29:57.881 SQ Associations: Not Supported 00:29:57.881 UUID List: Not Supported 00:29:57.881 Multi-Domain Subsystem: Not Supported 00:29:57.881 Fixed Capacity Management: Not Supported 00:29:57.881 Variable Capacity Management: Not Supported 00:29:57.881 Delete Endurance Group: Not Supported 00:29:57.881 Delete NVM Set: Not Supported 00:29:57.881 Extended LBA Formats Supported: Not Supported 00:29:57.881 Flexible Data Placement Supported: Not Supported 00:29:57.881 00:29:57.881 Controller Memory Buffer Support 00:29:57.881 ================================ 00:29:57.881 Supported: No 00:29:57.881 00:29:57.881 Persistent Memory Region Support 00:29:57.881 ================================ 00:29:57.881 Supported: No 00:29:57.881 00:29:57.881 Admin Command Set Attributes 00:29:57.881 ============================ 00:29:57.881 Security Send/Receive: Not Supported 00:29:57.881 Format NVM: Not Supported 00:29:57.881 Firmware Activate/Download: Not Supported 00:29:57.881 Namespace Management: Not Supported 00:29:57.881 Device Self-Test: Not Supported 00:29:57.881 Directives: Not Supported 00:29:57.881 NVMe-MI: Not Supported 00:29:57.881 Virtualization Management: Not Supported 00:29:57.881 Doorbell Buffer Config: Not Supported 00:29:57.881 Get LBA Status Capability: Not Supported 00:29:57.881 Command & Feature Lockdown Capability: Not Supported 00:29:57.881 Abort Command Limit: 4 00:29:57.881 Async Event Request Limit: 4 00:29:57.881 Number of Firmware Slots: N/A 00:29:57.881 Firmware Slot 1 Read-Only: N/A 00:29:57.881 Firmware Activation Without Reset: N/A 00:29:57.881 Multiple Update Detection Support: N/A 00:29:57.881 Firmware Update Granularity: No Information Provided 00:29:57.881 Per-Namespace SMART Log: Yes 00:29:57.881 Asymmetric Namespace Access Log Page: Supported 00:29:57.881 ANA Transition Time : 10 sec 00:29:57.881 00:29:57.881 Asymmetric Namespace Access Capabilities 00:29:57.881 ANA Optimized State : Supported 00:29:57.881 ANA Non-Optimized State : Supported 00:29:57.881 ANA Inaccessible State : Supported 00:29:57.881 ANA Persistent Loss State : Supported 00:29:57.881 ANA Change State : Supported 00:29:57.881 ANAGRPID is not changed : No 00:29:57.881 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:29:57.881 00:29:57.881 ANA Group Identifier Maximum : 128 00:29:57.881 Number of ANA Group Identifiers : 128 00:29:57.881 Max Number of Allowed Namespaces : 1024 00:29:57.881 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:57.881 Command Effects Log Page: Supported 00:29:57.881 Get Log Page Extended Data: Supported 00:29:57.881 Telemetry Log Pages: Not Supported 00:29:57.881 Persistent Event Log Pages: Not Supported 00:29:57.881 Supported Log Pages Log Page: May Support 00:29:57.881 Commands Supported & Effects Log Page: Not Supported 00:29:57.881 Feature Identifiers & Effects Log Page:May Support 00:29:57.881 NVMe-MI Commands & Effects Log Page: May Support 00:29:57.881 Data Area 4 for Telemetry Log: Not Supported 00:29:57.881 Error Log Page Entries Supported: 128 00:29:57.881 Keep Alive: Supported 00:29:57.881 Keep Alive Granularity: 1000 ms 00:29:57.881 00:29:57.881 NVM Command Set Attributes 00:29:57.881 ========================== 00:29:57.881 Submission Queue Entry Size 00:29:57.881 Max: 64 00:29:57.881 Min: 64 00:29:57.881 Completion Queue Entry Size 00:29:57.881 Max: 16 00:29:57.881 Min: 16 00:29:57.881 Number of Namespaces: 1024 00:29:57.881 Compare Command: Not Supported 00:29:57.881 Write Uncorrectable Command: Not Supported 00:29:57.881 Dataset Management Command: Supported 00:29:57.881 Write Zeroes Command: Supported 00:29:57.881 Set Features Save Field: Not Supported 00:29:57.881 Reservations: Not Supported 00:29:57.881 Timestamp: Not Supported 00:29:57.881 Copy: Not Supported 00:29:57.881 Volatile Write Cache: Present 00:29:57.881 Atomic Write Unit (Normal): 1 00:29:57.881 Atomic Write Unit (PFail): 1 00:29:57.881 Atomic Compare & Write Unit: 1 00:29:57.881 Fused Compare & Write: Not Supported 00:29:57.881 Scatter-Gather List 00:29:57.882 SGL Command Set: Supported 00:29:57.882 SGL Keyed: Not Supported 00:29:57.882 SGL Bit Bucket Descriptor: Not Supported 00:29:57.882 SGL Metadata Pointer: Not Supported 00:29:57.882 Oversized SGL: Not Supported 00:29:57.882 SGL Metadata Address: Not Supported 00:29:57.882 SGL Offset: Supported 00:29:57.882 Transport SGL Data Block: Not Supported 00:29:57.882 Replay Protected Memory Block: Not Supported 00:29:57.882 00:29:57.882 Firmware Slot Information 00:29:57.882 ========================= 00:29:57.882 Active slot: 0 00:29:57.882 00:29:57.882 Asymmetric Namespace Access 00:29:57.882 =========================== 00:29:57.882 Change Count : 0 00:29:57.882 Number of ANA Group Descriptors : 1 00:29:57.882 ANA Group Descriptor : 0 00:29:57.882 ANA Group ID : 1 00:29:57.882 Number of NSID Values : 1 00:29:57.882 Change Count : 0 00:29:57.882 ANA State : 1 00:29:57.882 Namespace Identifier : 1 00:29:57.882 00:29:57.882 Commands Supported and Effects 00:29:57.882 ============================== 00:29:57.882 Admin Commands 00:29:57.882 -------------- 00:29:57.882 Get Log Page (02h): Supported 00:29:57.882 Identify (06h): Supported 00:29:57.882 Abort (08h): Supported 00:29:57.882 Set Features (09h): Supported 00:29:57.882 Get Features (0Ah): Supported 00:29:57.882 Asynchronous Event Request (0Ch): Supported 00:29:57.882 Keep Alive (18h): Supported 00:29:57.882 I/O Commands 00:29:57.882 ------------ 00:29:57.882 Flush (00h): Supported 00:29:57.882 Write (01h): Supported LBA-Change 00:29:57.882 Read (02h): Supported 00:29:57.882 Write Zeroes (08h): Supported LBA-Change 00:29:57.882 Dataset Management (09h): Supported 00:29:57.882 00:29:57.882 Error Log 00:29:57.882 ========= 00:29:57.882 Entry: 0 00:29:57.882 Error Count: 0x3 00:29:57.882 Submission 
Queue Id: 0x0 00:29:57.882 Command Id: 0x5 00:29:57.882 Phase Bit: 0 00:29:57.882 Status Code: 0x2 00:29:57.882 Status Code Type: 0x0 00:29:57.882 Do Not Retry: 1 00:29:57.882 Error Location: 0x28 00:29:57.882 LBA: 0x0 00:29:57.882 Namespace: 0x0 00:29:57.882 Vendor Log Page: 0x0 00:29:57.882 ----------- 00:29:57.882 Entry: 1 00:29:57.882 Error Count: 0x2 00:29:57.882 Submission Queue Id: 0x0 00:29:57.882 Command Id: 0x5 00:29:57.882 Phase Bit: 0 00:29:57.882 Status Code: 0x2 00:29:57.882 Status Code Type: 0x0 00:29:57.882 Do Not Retry: 1 00:29:57.882 Error Location: 0x28 00:29:57.882 LBA: 0x0 00:29:57.882 Namespace: 0x0 00:29:57.882 Vendor Log Page: 0x0 00:29:57.882 ----------- 00:29:57.882 Entry: 2 00:29:57.882 Error Count: 0x1 00:29:57.882 Submission Queue Id: 0x0 00:29:57.882 Command Id: 0x4 00:29:57.882 Phase Bit: 0 00:29:57.882 Status Code: 0x2 00:29:57.882 Status Code Type: 0x0 00:29:57.882 Do Not Retry: 1 00:29:57.882 Error Location: 0x28 00:29:57.882 LBA: 0x0 00:29:57.882 Namespace: 0x0 00:29:57.882 Vendor Log Page: 0x0 00:29:57.882 00:29:57.882 Number of Queues 00:29:57.882 ================ 00:29:57.882 Number of I/O Submission Queues: 128 00:29:57.882 Number of I/O Completion Queues: 128 00:29:57.882 00:29:57.882 ZNS Specific Controller Data 00:29:57.882 ============================ 00:29:57.882 Zone Append Size Limit: 0 00:29:57.882 00:29:57.882 00:29:57.882 Active Namespaces 00:29:57.882 ================= 00:29:57.882 get_feature(0x05) failed 00:29:57.882 Namespace ID:1 00:29:57.882 Command Set Identifier: NVM (00h) 00:29:57.882 Deallocate: Supported 00:29:57.882 Deallocated/Unwritten Error: Not Supported 00:29:57.882 Deallocated Read Value: Unknown 00:29:57.882 Deallocate in Write Zeroes: Not Supported 00:29:57.882 Deallocated Guard Field: 0xFFFF 00:29:57.882 Flush: Supported 00:29:57.882 Reservation: Not Supported 00:29:57.882 Namespace Sharing Capabilities: Multiple Controllers 00:29:57.882 Size (in LBAs): 1310720 (5GiB) 00:29:57.882 Capacity (in LBAs): 1310720 (5GiB) 00:29:57.882 Utilization (in LBAs): 1310720 (5GiB) 00:29:57.882 UUID: e5a89b28-4c55-4631-b52f-7946f4286b1b 00:29:57.882 Thin Provisioning: Not Supported 00:29:57.882 Per-NS Atomic Units: Yes 00:29:57.882 Atomic Boundary Size (Normal): 0 00:29:57.882 Atomic Boundary Size (PFail): 0 00:29:57.882 Atomic Boundary Offset: 0 00:29:57.882 NGUID/EUI64 Never Reused: No 00:29:57.882 ANA group ID: 1 00:29:57.882 Namespace Write Protected: No 00:29:57.882 Number of LBA Formats: 1 00:29:57.882 Current LBA Format: LBA Format #00 00:29:57.882 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:29:57.882 00:29:57.882 11:17:26 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:57.882 11:17:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:57.882 11:17:26 -- nvmf/common.sh@117 -- # sync 00:29:57.882 11:17:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.882 11:17:26 -- nvmf/common.sh@120 -- # set +e 00:29:57.882 11:17:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.882 11:17:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.882 rmmod nvme_tcp 00:29:57.882 rmmod nvme_fabrics 00:29:57.882 11:17:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.882 11:17:26 -- nvmf/common.sh@124 -- # set -e 00:29:57.882 11:17:26 -- nvmf/common.sh@125 -- # return 0 00:29:57.882 11:17:26 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:29:57.882 11:17:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:57.882 11:17:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:57.882 11:17:26 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:57.882 11:17:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.882 11:17:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.882 11:17:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.882 11:17:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.882 11:17:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.882 11:17:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:57.882 11:17:26 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:57.882 11:17:26 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:57.882 11:17:26 -- nvmf/common.sh@675 -- # echo 0 00:29:57.882 11:17:26 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:57.882 11:17:26 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:57.882 11:17:26 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:57.882 11:17:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:57.882 11:17:26 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:57.882 11:17:26 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:57.882 11:17:26 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:58.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:58.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:58.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:58.818 00:29:58.818 real 0m2.789s 00:29:58.818 user 0m0.906s 00:29:58.818 sys 0m1.368s 00:29:58.818 11:17:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:58.818 ************************************ 00:29:58.818 END TEST nvmf_identify_kernel_target 00:29:58.818 ************************************ 00:29:58.818 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:29:58.818 11:17:27 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:58.818 11:17:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:58.818 11:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:58.818 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:29:59.077 ************************************ 00:29:59.077 START TEST nvmf_auth 00:29:59.077 ************************************ 00:29:59.077 11:17:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:59.077 * Looking for test storage... 
00:29:59.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:59.077 11:17:27 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:59.077 11:17:27 -- nvmf/common.sh@7 -- # uname -s 00:29:59.077 11:17:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.077 11:17:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.077 11:17:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.077 11:17:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.077 11:17:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.077 11:17:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.077 11:17:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.077 11:17:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.077 11:17:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.077 11:17:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.077 11:17:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:59.077 11:17:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:29:59.077 11:17:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.077 11:17:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.077 11:17:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:59.077 11:17:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.077 11:17:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:59.077 11:17:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.077 11:17:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.077 11:17:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.078 11:17:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.078 11:17:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.078 11:17:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.078 11:17:27 -- paths/export.sh@5 -- # export PATH 00:29:59.078 11:17:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.078 11:17:27 -- nvmf/common.sh@47 -- # : 0 00:29:59.078 11:17:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:59.078 11:17:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:59.078 11:17:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.078 11:17:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.078 11:17:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.078 11:17:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:59.078 11:17:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:59.078 11:17:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:59.078 11:17:27 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:59.078 11:17:27 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:59.078 11:17:27 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:59.078 11:17:27 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:59.078 11:17:27 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:59.078 11:17:27 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:59.078 11:17:27 -- host/auth.sh@21 -- # keys=() 00:29:59.078 11:17:27 -- host/auth.sh@77 -- # nvmftestinit 00:29:59.078 11:17:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:59.078 11:17:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.078 11:17:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:59.078 11:17:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:59.078 11:17:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:59.078 11:17:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.078 11:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.078 11:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.078 11:17:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:59.078 11:17:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:59.078 11:17:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:59.078 11:17:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:59.078 11:17:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:59.078 11:17:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:59.078 11:17:27 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.078 11:17:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.078 11:17:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:59.078 11:17:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:59.078 11:17:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:59.078 11:17:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:59.078 11:17:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:59.078 11:17:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.078 11:17:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:59.078 11:17:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:59.078 11:17:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:59.078 11:17:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:59.078 11:17:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:59.078 11:17:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:59.078 Cannot find device "nvmf_tgt_br" 00:29:59.078 11:17:27 -- nvmf/common.sh@155 -- # true 00:29:59.078 11:17:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:59.078 Cannot find device "nvmf_tgt_br2" 00:29:59.078 11:17:27 -- nvmf/common.sh@156 -- # true 00:29:59.078 11:17:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:59.078 11:17:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:59.078 Cannot find device "nvmf_tgt_br" 00:29:59.078 11:17:27 -- nvmf/common.sh@158 -- # true 00:29:59.078 11:17:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:59.078 Cannot find device "nvmf_tgt_br2" 00:29:59.078 11:17:27 -- nvmf/common.sh@159 -- # true 00:29:59.078 11:17:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:59.336 11:17:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:59.336 11:17:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:59.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:59.336 11:17:27 -- nvmf/common.sh@162 -- # true 00:29:59.336 11:17:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:59.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:59.336 11:17:27 -- nvmf/common.sh@163 -- # true 00:29:59.336 11:17:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:59.336 11:17:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:59.336 11:17:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:59.336 11:17:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:59.336 11:17:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:59.336 11:17:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:59.336 11:17:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:59.336 11:17:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:59.336 11:17:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:59.336 11:17:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:59.336 11:17:27 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:59.336 11:17:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:59.336 11:17:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:59.336 11:17:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:59.336 11:17:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:59.336 11:17:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:59.336 11:17:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:59.336 11:17:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:59.336 11:17:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:59.336 11:17:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:59.336 11:17:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:59.336 11:17:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:59.336 11:17:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:59.336 11:17:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:59.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:29:59.336 00:29:59.336 --- 10.0.0.2 ping statistics --- 00:29:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.336 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:29:59.336 11:17:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:59.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:59.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:29:59.336 00:29:59.336 --- 10.0.0.3 ping statistics --- 00:29:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.336 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:29:59.336 11:17:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:59.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:29:59.336 00:29:59.336 --- 10.0.0.1 ping statistics --- 00:29:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.336 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:29:59.336 11:17:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.336 11:17:27 -- nvmf/common.sh@422 -- # return 0 00:29:59.336 11:17:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:59.336 11:17:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.336 11:17:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:59.336 11:17:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:59.336 11:17:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.336 11:17:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:59.336 11:17:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:59.336 11:17:27 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:29:59.336 11:17:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:59.336 11:17:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:59.336 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:29:59.336 11:17:27 -- nvmf/common.sh@470 -- # nvmfpid=102867 00:29:59.336 11:17:27 -- nvmf/common.sh@471 -- # waitforlisten 102867 00:29:59.336 11:17:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:59.336 11:17:27 -- common/autotest_common.sh@817 -- # '[' -z 102867 ']' 00:29:59.336 11:17:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.336 11:17:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:59.336 11:17:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
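(Editor's note: the nvmf_veth_init trace above reduces to the small topology below. This is only a condensed restatement of the commands already traced — the second target pair nvmf_tgt_if2/nvmf_tgt_br2 is wired the same way, and each interface is also brought up with "ip link set ... up" as shown in the trace.)
# Condensed sketch of the test network built by nvmf_veth_init:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root netns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, moved into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# the three pings in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the netns) verify this wiring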
00:29:59.336 11:17:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:59.336 11:17:27 -- common/autotest_common.sh@10 -- # set +x 00:30:00.712 11:17:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:00.712 11:17:28 -- common/autotest_common.sh@850 -- # return 0 00:30:00.712 11:17:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:00.712 11:17:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:00.712 11:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:00.712 11:17:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.712 11:17:29 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:00.712 11:17:29 -- host/auth.sh@81 -- # gen_key null 32 00:30:00.712 11:17:29 -- host/auth.sh@53 -- # local digest len file key 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # local -A digests 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # digest=null 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # len=32 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # key=95e80f82105886c05c13e5cc8e3de6d9 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.p5A 00:30:00.712 11:17:29 -- host/auth.sh@59 -- # format_dhchap_key 95e80f82105886c05c13e5cc8e3de6d9 0 00:30:00.712 11:17:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 95e80f82105886c05c13e5cc8e3de6d9 0 00:30:00.712 11:17:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # key=95e80f82105886c05c13e5cc8e3de6d9 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # digest=0 00:30:00.712 11:17:29 -- nvmf/common.sh@694 -- # python - 00:30:00.712 11:17:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.p5A 00:30:00.712 11:17:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.p5A 00:30:00.712 11:17:29 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.p5A 00:30:00.712 11:17:29 -- host/auth.sh@82 -- # gen_key null 48 00:30:00.712 11:17:29 -- host/auth.sh@53 -- # local digest len file key 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # local -A digests 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # digest=null 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # len=48 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # key=9fced86c2dffd3195145d0a143493e949f0565bfbdcc24ba 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.OxF 00:30:00.712 11:17:29 -- host/auth.sh@59 -- # format_dhchap_key 9fced86c2dffd3195145d0a143493e949f0565bfbdcc24ba 0 00:30:00.712 11:17:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 9fced86c2dffd3195145d0a143493e949f0565bfbdcc24ba 0 00:30:00.712 11:17:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # key=9fced86c2dffd3195145d0a143493e949f0565bfbdcc24ba 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # digest=0 00:30:00.712 
11:17:29 -- nvmf/common.sh@694 -- # python - 00:30:00.712 11:17:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.OxF 00:30:00.712 11:17:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.OxF 00:30:00.712 11:17:29 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.OxF 00:30:00.712 11:17:29 -- host/auth.sh@83 -- # gen_key sha256 32 00:30:00.712 11:17:29 -- host/auth.sh@53 -- # local digest len file key 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # local -A digests 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # digest=sha256 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # len=32 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # key=d29917e9c04cde9c7bd24c0b7e2b3e1b 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.tv7 00:30:00.712 11:17:29 -- host/auth.sh@59 -- # format_dhchap_key d29917e9c04cde9c7bd24c0b7e2b3e1b 1 00:30:00.712 11:17:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 d29917e9c04cde9c7bd24c0b7e2b3e1b 1 00:30:00.712 11:17:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # key=d29917e9c04cde9c7bd24c0b7e2b3e1b 00:30:00.712 11:17:29 -- nvmf/common.sh@693 -- # digest=1 00:30:00.712 11:17:29 -- nvmf/common.sh@694 -- # python - 00:30:00.712 11:17:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.tv7 00:30:00.712 11:17:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.tv7 00:30:00.712 11:17:29 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.tv7 00:30:00.712 11:17:29 -- host/auth.sh@84 -- # gen_key sha384 48 00:30:00.712 11:17:29 -- host/auth.sh@53 -- # local digest len file key 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.712 11:17:29 -- host/auth.sh@54 -- # local -A digests 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # digest=sha384 00:30:00.712 11:17:29 -- host/auth.sh@56 -- # len=48 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:00.712 11:17:29 -- host/auth.sh@57 -- # key=048918508193cece115b913dc35c36282bc0c13c9c59ea0f 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:30:00.712 11:17:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.q6e 00:30:00.713 11:17:29 -- host/auth.sh@59 -- # format_dhchap_key 048918508193cece115b913dc35c36282bc0c13c9c59ea0f 2 00:30:00.713 11:17:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 048918508193cece115b913dc35c36282bc0c13c9c59ea0f 2 00:30:00.713 11:17:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:00.713 11:17:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:00.713 11:17:29 -- nvmf/common.sh@693 -- # key=048918508193cece115b913dc35c36282bc0c13c9c59ea0f 00:30:00.713 11:17:29 -- nvmf/common.sh@693 -- # digest=2 00:30:00.713 11:17:29 -- nvmf/common.sh@694 -- # python - 00:30:00.713 11:17:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.q6e 00:30:00.713 11:17:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.q6e 00:30:00.713 11:17:29 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.q6e 00:30:00.713 11:17:29 -- host/auth.sh@85 -- # gen_key sha512 64 00:30:00.713 11:17:29 -- host/auth.sh@53 -- # local digest len file key 00:30:00.713 11:17:29 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:00.713 11:17:29 -- host/auth.sh@54 -- # local -A digests 00:30:00.713 11:17:29 -- host/auth.sh@56 -- # digest=sha512 00:30:00.713 11:17:29 -- host/auth.sh@56 -- # len=64 00:30:00.713 11:17:29 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:00.713 11:17:29 -- host/auth.sh@57 -- # key=946ac4e0da6666ea22d3f2296c30279efe77c8ca5763157fdd3c041c93fbe5fb 00:30:00.713 11:17:29 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:30:00.713 11:17:29 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.gfr 00:30:00.713 11:17:29 -- host/auth.sh@59 -- # format_dhchap_key 946ac4e0da6666ea22d3f2296c30279efe77c8ca5763157fdd3c041c93fbe5fb 3 00:30:00.713 11:17:29 -- nvmf/common.sh@708 -- # format_key DHHC-1 946ac4e0da6666ea22d3f2296c30279efe77c8ca5763157fdd3c041c93fbe5fb 3 00:30:00.713 11:17:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:00.713 11:17:29 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:00.713 11:17:29 -- nvmf/common.sh@693 -- # key=946ac4e0da6666ea22d3f2296c30279efe77c8ca5763157fdd3c041c93fbe5fb 00:30:00.713 11:17:29 -- nvmf/common.sh@693 -- # digest=3 00:30:00.713 11:17:29 -- nvmf/common.sh@694 -- # python - 00:30:00.971 11:17:29 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.gfr 00:30:00.971 11:17:29 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.gfr 00:30:00.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.971 11:17:29 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.gfr 00:30:00.971 11:17:29 -- host/auth.sh@87 -- # waitforlisten 102867 00:30:00.971 11:17:29 -- common/autotest_common.sh@817 -- # '[' -z 102867 ']' 00:30:00.971 11:17:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.971 11:17:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:00.971 11:17:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
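(Editor's note: the python step inside gen_key/format_dhchap_key is not visible in the trace because heredoc bodies are not xtraced. The sketch below shows what that step most likely does with the hex pulled from /dev/urandom; the CRC-32 handling is an assumption based on the NVMe DH-HMAC-CHAP secret representation, not lifted from the script, and the input value is keys[1] from the trace above.)
# Illustrative sketch only: rebuild keys[1] (digest "00" = null) from its raw hex.
# Assumption: the four bytes folded in before the base64 padding are the CRC-32 of
# the secret, appended little-endian.
key=9fced86c2dffd3195145d0a143493e949f0565bfbdcc24ba   # from 'xxd -p -c0 -l 24 /dev/urandom' above
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")      # assumed byte order
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PYEOF
# if the assumption holds, this prints the same secret that host/auth.sh passes to
# nvmet_auth_set_key further down: DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: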
00:30:00.971 11:17:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:00.971 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:01.229 11:17:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:01.229 11:17:29 -- common/autotest_common.sh@850 -- # return 0 00:30:01.229 11:17:29 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:01.229 11:17:29 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.p5A 00:30:01.229 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.229 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:01.229 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.229 11:17:29 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:01.229 11:17:29 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OxF 00:30:01.229 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.229 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:01.229 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.229 11:17:29 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:01.229 11:17:29 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tv7 00:30:01.229 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.229 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:01.229 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.229 11:17:29 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:01.229 11:17:29 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.q6e 00:30:01.229 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.229 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:01.229 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.229 11:17:29 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:01.229 11:17:29 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gfr 00:30:01.229 11:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.229 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:01.229 11:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.229 11:17:29 -- host/auth.sh@92 -- # nvmet_auth_init 00:30:01.229 11:17:29 -- host/auth.sh@35 -- # get_main_ns_ip 00:30:01.229 11:17:29 -- nvmf/common.sh@717 -- # local ip 00:30:01.229 11:17:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:01.229 11:17:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:01.229 11:17:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.229 11:17:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.229 11:17:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:01.229 11:17:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.229 11:17:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:01.229 11:17:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:01.229 11:17:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:01.229 11:17:29 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:01.229 11:17:29 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:01.229 11:17:29 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:01.229 11:17:29 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:01.229 11:17:29 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:01.229 11:17:29 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:01.229 11:17:29 -- nvmf/common.sh@628 -- # local block nvme 00:30:01.229 11:17:29 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:30:01.229 11:17:29 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:01.229 11:17:29 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:01.229 11:17:29 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:01.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:01.486 Waiting for block devices as requested 00:30:01.742 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:01.742 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:02.307 11:17:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:02.307 11:17:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:02.307 11:17:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:02.307 11:17:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:02.307 11:17:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:02.307 11:17:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:02.307 11:17:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:02.307 11:17:30 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:02.307 11:17:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:02.307 No valid GPT data, bailing 00:30:02.307 11:17:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:02.307 11:17:30 -- scripts/common.sh@391 -- # pt= 00:30:02.307 11:17:30 -- scripts/common.sh@392 -- # return 1 00:30:02.307 11:17:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:02.307 11:17:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:02.307 11:17:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:02.307 11:17:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:30:02.307 11:17:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:30:02.307 11:17:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:02.307 11:17:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:02.307 11:17:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:30:02.307 11:17:30 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:02.307 11:17:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:02.307 No valid GPT data, bailing 00:30:02.307 11:17:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:30:02.307 11:17:30 -- scripts/common.sh@391 -- # pt= 00:30:02.307 11:17:30 -- scripts/common.sh@392 -- # return 1 00:30:02.307 11:17:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:30:02.307 11:17:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:02.307 11:17:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:02.307 11:17:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:30:02.565 11:17:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:30:02.565 11:17:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:02.565 11:17:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:02.565 11:17:30 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:30:02.565 11:17:30 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:02.566 11:17:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:02.566 No valid GPT data, bailing 00:30:02.566 11:17:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:02.566 11:17:31 -- scripts/common.sh@391 -- # pt= 00:30:02.566 11:17:31 -- scripts/common.sh@392 -- # return 1 00:30:02.566 11:17:31 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:30:02.566 11:17:31 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:02.566 11:17:31 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:02.566 11:17:31 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:30:02.566 11:17:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:30:02.566 11:17:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:02.566 11:17:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:02.566 11:17:31 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:30:02.566 11:17:31 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:02.566 11:17:31 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:02.566 No valid GPT data, bailing 00:30:02.566 11:17:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:02.566 11:17:31 -- scripts/common.sh@391 -- # pt= 00:30:02.566 11:17:31 -- scripts/common.sh@392 -- # return 1 00:30:02.566 11:17:31 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:30:02.566 11:17:31 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:30:02.566 11:17:31 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:02.566 11:17:31 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:02.566 11:17:31 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:02.566 11:17:31 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:02.566 11:17:31 -- nvmf/common.sh@656 -- # echo 1 00:30:02.566 11:17:31 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:30:02.566 11:17:31 -- nvmf/common.sh@658 -- # echo 1 00:30:02.566 11:17:31 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:02.566 11:17:31 -- nvmf/common.sh@661 -- # echo tcp 00:30:02.566 11:17:31 -- nvmf/common.sh@662 -- # echo 4420 00:30:02.566 11:17:31 -- nvmf/common.sh@663 -- # echo ipv4 00:30:02.566 11:17:31 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:02.566 11:17:31 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -a 10.0.0.1 -t tcp -s 4420 00:30:02.566 00:30:02.566 Discovery Log Number of Records 2, Generation counter 2 00:30:02.566 =====Discovery Log Entry 0====== 00:30:02.566 trtype: tcp 00:30:02.566 adrfam: ipv4 00:30:02.566 subtype: current discovery subsystem 00:30:02.566 treq: not specified, sq flow control disable supported 00:30:02.566 portid: 1 00:30:02.566 trsvcid: 4420 00:30:02.566 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:02.566 traddr: 10.0.0.1 00:30:02.566 eflags: none 00:30:02.566 sectype: none 00:30:02.566 =====Discovery Log Entry 1====== 00:30:02.566 trtype: tcp 00:30:02.566 adrfam: ipv4 00:30:02.566 subtype: nvme subsystem 00:30:02.566 treq: not specified, sq flow control disable supported 
00:30:02.566 portid: 1 00:30:02.566 trsvcid: 4420 00:30:02.566 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:02.566 traddr: 10.0.0.1 00:30:02.566 eflags: none 00:30:02.566 sectype: none 00:30:02.566 11:17:31 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:02.566 11:17:31 -- host/auth.sh@37 -- # echo 0 00:30:02.566 11:17:31 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:02.566 11:17:31 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:02.566 11:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:02.566 11:17:31 -- host/auth.sh@44 -- # digest=sha256 00:30:02.566 11:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.566 11:17:31 -- host/auth.sh@44 -- # keyid=1 00:30:02.566 11:17:31 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:02.566 11:17:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:02.566 11:17:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:02.825 11:17:31 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:02.825 11:17:31 -- host/auth.sh@100 -- # IFS=, 00:30:02.825 11:17:31 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:30:02.825 11:17:31 -- host/auth.sh@100 -- # IFS=, 00:30:02.825 11:17:31 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:02.825 11:17:31 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:02.825 11:17:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:02.825 11:17:31 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:30:02.825 11:17:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:02.825 11:17:31 -- host/auth.sh@68 -- # keyid=1 00:30:02.825 11:17:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:02.825 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.825 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:02.825 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.825 11:17:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:02.825 11:17:31 -- nvmf/common.sh@717 -- # local ip 00:30:02.825 11:17:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:02.825 11:17:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:02.825 11:17:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.825 11:17:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.825 11:17:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:02.825 11:17:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.825 11:17:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:02.825 11:17:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:02.825 11:17:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:02.825 11:17:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:02.825 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.825 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:02.825 
nvme0n1 00:30:02.825 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.825 11:17:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.825 11:17:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:02.825 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.825 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:02.825 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.825 11:17:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.825 11:17:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.825 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.825 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:02.825 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.825 11:17:31 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:02.825 11:17:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:02.825 11:17:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:02.825 11:17:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:02.825 11:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:02.825 11:17:31 -- host/auth.sh@44 -- # digest=sha256 00:30:02.825 11:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:02.825 11:17:31 -- host/auth.sh@44 -- # keyid=0 00:30:02.825 11:17:31 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:02.825 11:17:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:02.825 11:17:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:02.825 11:17:31 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:02.825 11:17:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:30:02.825 11:17:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:02.825 11:17:31 -- host/auth.sh@68 -- # digest=sha256 00:30:02.825 11:17:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:02.825 11:17:31 -- host/auth.sh@68 -- # keyid=0 00:30:02.825 11:17:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:02.825 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.825 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:02.825 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.825 11:17:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:02.825 11:17:31 -- nvmf/common.sh@717 -- # local ip 00:30:02.825 11:17:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:02.825 11:17:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:02.825 11:17:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.825 11:17:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.825 11:17:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:02.825 11:17:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.825 11:17:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:02.825 11:17:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:02.825 11:17:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:02.825 11:17:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:02.825 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.825 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.092 nvme0n1 
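(Editor's note: one connect_authenticate pass boils down to the RPC sequence below; rpc_cmd wraps SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so this is an illustrative standalone equivalent of the key0/sha256/ffdhe2048 iteration just traced, not an extra test step.)
# Rough standalone equivalent of the iteration above; flags and NQNs mirror the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.p5A
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
$rpc bdev_nvme_get_controllers     # expect one controller named "nvme0"
$rpc bdev_nvme_detach_controller nvme0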
00:30:03.092 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.092 11:17:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.092 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.092 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.092 11:17:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.092 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.092 11:17:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.092 11:17:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.092 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.092 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.092 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.092 11:17:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.092 11:17:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:03.092 11:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.092 11:17:31 -- host/auth.sh@44 -- # digest=sha256 00:30:03.092 11:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.092 11:17:31 -- host/auth.sh@44 -- # keyid=1 00:30:03.092 11:17:31 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:03.092 11:17:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.092 11:17:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:03.092 11:17:31 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:03.092 11:17:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:30:03.092 11:17:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:03.092 11:17:31 -- host/auth.sh@68 -- # digest=sha256 00:30:03.092 11:17:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:03.092 11:17:31 -- host/auth.sh@68 -- # keyid=1 00:30:03.092 11:17:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:03.092 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.092 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.092 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.092 11:17:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:03.092 11:17:31 -- nvmf/common.sh@717 -- # local ip 00:30:03.092 11:17:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:03.092 11:17:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:03.092 11:17:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.092 11:17:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.092 11:17:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:03.092 11:17:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.092 11:17:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:03.092 11:17:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:03.092 11:17:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:03.092 11:17:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:03.092 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.092 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 nvme0n1 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:30:03.351 11:17:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.351 11:17:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:03.351 11:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.351 11:17:31 -- host/auth.sh@44 -- # digest=sha256 00:30:03.351 11:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.351 11:17:31 -- host/auth.sh@44 -- # keyid=2 00:30:03.351 11:17:31 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:03.351 11:17:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.351 11:17:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:03.351 11:17:31 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:03.351 11:17:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:30:03.351 11:17:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:03.351 11:17:31 -- host/auth.sh@68 -- # digest=sha256 00:30:03.351 11:17:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:03.351 11:17:31 -- host/auth.sh@68 -- # keyid=2 00:30:03.351 11:17:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:03.351 11:17:31 -- nvmf/common.sh@717 -- # local ip 00:30:03.351 11:17:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:03.351 11:17:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:03.351 11:17:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.351 11:17:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.351 11:17:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:03.351 11:17:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.351 11:17:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:03.351 11:17:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:03.351 11:17:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:03.351 11:17:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 nvme0n1 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.351 11:17:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.351 11:17:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:03.351 11:17:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.351 11:17:31 -- host/auth.sh@44 -- # digest=sha256 00:30:03.351 11:17:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.351 11:17:31 -- host/auth.sh@44 -- # keyid=3 00:30:03.351 11:17:31 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:03.351 11:17:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.351 11:17:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:03.351 11:17:31 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:03.351 11:17:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:30:03.351 11:17:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:03.351 11:17:31 -- host/auth.sh@68 -- # digest=sha256 00:30:03.351 11:17:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:03.351 11:17:31 -- host/auth.sh@68 -- # keyid=3 00:30:03.351 11:17:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.351 11:17:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.351 11:17:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:03.351 11:17:31 -- nvmf/common.sh@717 -- # local ip 00:30:03.351 11:17:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:03.351 11:17:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:03.351 11:17:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.351 11:17:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.351 11:17:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:03.351 11:17:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.351 11:17:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:03.351 11:17:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:03.351 11:17:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:03.351 11:17:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:03.351 11:17:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.351 11:17:31 -- common/autotest_common.sh@10 -- # set +x 00:30:03.628 nvme0n1 00:30:03.628 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.628 11:17:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.628 11:17:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.628 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.628 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.628 11:17:32 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.628 11:17:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.628 11:17:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.628 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.628 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.628 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.628 11:17:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.628 11:17:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:03.628 11:17:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.628 11:17:32 -- host/auth.sh@44 -- # digest=sha256 00:30:03.628 11:17:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.628 11:17:32 -- host/auth.sh@44 -- # keyid=4 00:30:03.628 11:17:32 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:03.628 11:17:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.628 11:17:32 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:03.628 11:17:32 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:03.628 11:17:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:30:03.628 11:17:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:03.628 11:17:32 -- host/auth.sh@68 -- # digest=sha256 00:30:03.628 11:17:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:03.628 11:17:32 -- host/auth.sh@68 -- # keyid=4 00:30:03.628 11:17:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:03.628 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.628 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.628 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.628 11:17:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:03.628 11:17:32 -- nvmf/common.sh@717 -- # local ip 00:30:03.628 11:17:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:03.628 11:17:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:03.628 11:17:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.628 11:17:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.628 11:17:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:03.628 11:17:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.628 11:17:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:03.628 11:17:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:03.628 11:17:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:03.628 11:17:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:03.628 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.628 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.628 nvme0n1 00:30:03.628 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.628 11:17:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.628 11:17:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:03.628 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.628 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.886 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.886 11:17:32 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.886 11:17:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.886 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:03.886 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:03.886 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:03.886 11:17:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.886 11:17:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:03.886 11:17:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:03.886 11:17:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:03.886 11:17:32 -- host/auth.sh@44 -- # digest=sha256 00:30:03.886 11:17:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:03.886 11:17:32 -- host/auth.sh@44 -- # keyid=0 00:30:03.886 11:17:32 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:03.886 11:17:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:03.886 11:17:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:04.145 11:17:32 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:04.145 11:17:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:30:04.145 11:17:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.145 11:17:32 -- host/auth.sh@68 -- # digest=sha256 00:30:04.145 11:17:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:04.145 11:17:32 -- host/auth.sh@68 -- # keyid=0 00:30:04.145 11:17:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:04.145 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.145 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.145 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.145 11:17:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.145 11:17:32 -- nvmf/common.sh@717 -- # local ip 00:30:04.145 11:17:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.145 11:17:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.145 11:17:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.145 11:17:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.145 11:17:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.145 11:17:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.145 11:17:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.145 11:17:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.145 11:17:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.145 11:17:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:04.145 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.145 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.145 nvme0n1 00:30:04.145 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.145 11:17:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.145 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.145 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.145 11:17:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:04.404 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.404 11:17:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.404 11:17:32 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.404 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.404 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.404 11:17:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:04.404 11:17:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:04.404 11:17:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:04.404 11:17:32 -- host/auth.sh@44 -- # digest=sha256 00:30:04.404 11:17:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:04.404 11:17:32 -- host/auth.sh@44 -- # keyid=1 00:30:04.404 11:17:32 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:04.404 11:17:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:04.404 11:17:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:04.404 11:17:32 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:04.404 11:17:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:30:04.404 11:17:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.404 11:17:32 -- host/auth.sh@68 -- # digest=sha256 00:30:04.404 11:17:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:04.404 11:17:32 -- host/auth.sh@68 -- # keyid=1 00:30:04.404 11:17:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:04.404 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.404 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.404 11:17:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.404 11:17:32 -- nvmf/common.sh@717 -- # local ip 00:30:04.404 11:17:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.404 11:17:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.404 11:17:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.404 11:17:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.404 11:17:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.404 11:17:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.404 11:17:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.404 11:17:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.404 11:17:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.404 11:17:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:04.404 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.404 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 nvme0n1 00:30:04.404 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.404 11:17:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.404 11:17:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.404 11:17:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 11:17:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:04.404 11:17:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.404 11:17:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.404 11:17:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.404 11:17:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:30:04.404 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.404 11:17:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:04.662 11:17:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:04.662 11:17:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:04.662 11:17:33 -- host/auth.sh@44 -- # digest=sha256 00:30:04.662 11:17:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:04.662 11:17:33 -- host/auth.sh@44 -- # keyid=2 00:30:04.662 11:17:33 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:04.662 11:17:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:04.662 11:17:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:04.662 11:17:33 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:04.662 11:17:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:30:04.662 11:17:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.662 11:17:33 -- host/auth.sh@68 -- # digest=sha256 00:30:04.662 11:17:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:04.662 11:17:33 -- host/auth.sh@68 -- # keyid=2 00:30:04.662 11:17:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:04.662 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.662 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.662 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.662 11:17:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.662 11:17:33 -- nvmf/common.sh@717 -- # local ip 00:30:04.662 11:17:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.662 11:17:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.662 11:17:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.662 11:17:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.662 11:17:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.662 11:17:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.662 11:17:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.662 11:17:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.662 11:17:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.662 11:17:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:04.662 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.662 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.662 nvme0n1 00:30:04.662 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.662 11:17:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.662 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.662 11:17:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:04.662 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.662 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.662 11:17:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.662 11:17:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.662 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.662 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.662 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.662 
11:17:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:04.662 11:17:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:04.662 11:17:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:04.662 11:17:33 -- host/auth.sh@44 -- # digest=sha256 00:30:04.662 11:17:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:04.662 11:17:33 -- host/auth.sh@44 -- # keyid=3 00:30:04.662 11:17:33 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:04.662 11:17:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:04.662 11:17:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:04.662 11:17:33 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:04.662 11:17:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:30:04.662 11:17:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.662 11:17:33 -- host/auth.sh@68 -- # digest=sha256 00:30:04.662 11:17:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:04.662 11:17:33 -- host/auth.sh@68 -- # keyid=3 00:30:04.662 11:17:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:04.662 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.662 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.662 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.662 11:17:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.662 11:17:33 -- nvmf/common.sh@717 -- # local ip 00:30:04.662 11:17:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.662 11:17:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.662 11:17:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.662 11:17:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.662 11:17:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.662 11:17:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.662 11:17:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.662 11:17:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.662 11:17:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.662 11:17:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:04.662 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.662 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.920 nvme0n1 00:30:04.920 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.920 11:17:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.920 11:17:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:04.920 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.920 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.920 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.920 11:17:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.920 11:17:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.920 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.920 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.920 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.920 11:17:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:04.920 11:17:33 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:30:04.920 11:17:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:04.920 11:17:33 -- host/auth.sh@44 -- # digest=sha256 00:30:04.920 11:17:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:04.920 11:17:33 -- host/auth.sh@44 -- # keyid=4 00:30:04.920 11:17:33 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:04.920 11:17:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:04.920 11:17:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:04.920 11:17:33 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:04.920 11:17:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:30:04.920 11:17:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:04.920 11:17:33 -- host/auth.sh@68 -- # digest=sha256 00:30:04.920 11:17:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:04.920 11:17:33 -- host/auth.sh@68 -- # keyid=4 00:30:04.920 11:17:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:04.920 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.920 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:04.920 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:04.920 11:17:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:04.920 11:17:33 -- nvmf/common.sh@717 -- # local ip 00:30:04.920 11:17:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:04.920 11:17:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:04.920 11:17:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.920 11:17:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.920 11:17:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:04.920 11:17:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.920 11:17:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:04.920 11:17:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:04.920 11:17:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:04.920 11:17:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:04.920 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:04.920 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:05.177 nvme0n1 00:30:05.177 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.177 11:17:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.177 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.177 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:05.177 11:17:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:05.177 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.177 11:17:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.177 11:17:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.177 11:17:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.177 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:30:05.177 11:17:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.177 11:17:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:05.177 11:17:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:05.177 11:17:33 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:30:05.177 11:17:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:05.177 11:17:33 -- host/auth.sh@44 -- # digest=sha256 00:30:05.177 11:17:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:05.177 11:17:33 -- host/auth.sh@44 -- # keyid=0 00:30:05.177 11:17:33 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:05.177 11:17:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:05.177 11:17:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:05.745 11:17:34 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:05.745 11:17:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:30:05.745 11:17:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:05.745 11:17:34 -- host/auth.sh@68 -- # digest=sha256 00:30:05.745 11:17:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:05.745 11:17:34 -- host/auth.sh@68 -- # keyid=0 00:30:05.745 11:17:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:05.745 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.745 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:05.745 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.745 11:17:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:05.745 11:17:34 -- nvmf/common.sh@717 -- # local ip 00:30:05.745 11:17:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:05.745 11:17:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:05.745 11:17:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.745 11:17:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.745 11:17:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:05.745 11:17:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.745 11:17:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:05.745 11:17:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:05.745 11:17:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:05.745 11:17:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:05.745 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.745 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.004 nvme0n1 00:30:06.004 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.004 11:17:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.004 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.004 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.004 11:17:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:06.004 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.004 11:17:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.004 11:17:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.004 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.004 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.004 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.004 11:17:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:06.004 11:17:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:06.004 11:17:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:06.004 11:17:34 -- host/auth.sh@44 -- # 
digest=sha256 00:30:06.004 11:17:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.004 11:17:34 -- host/auth.sh@44 -- # keyid=1 00:30:06.004 11:17:34 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:06.004 11:17:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:06.004 11:17:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:06.004 11:17:34 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:06.004 11:17:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:30:06.004 11:17:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:06.004 11:17:34 -- host/auth.sh@68 -- # digest=sha256 00:30:06.004 11:17:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:06.004 11:17:34 -- host/auth.sh@68 -- # keyid=1 00:30:06.004 11:17:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:06.004 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.004 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.004 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.004 11:17:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:06.004 11:17:34 -- nvmf/common.sh@717 -- # local ip 00:30:06.004 11:17:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:06.004 11:17:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:06.004 11:17:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.004 11:17:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.004 11:17:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:06.004 11:17:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.004 11:17:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:06.004 11:17:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:06.004 11:17:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:06.004 11:17:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:06.004 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.004 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.263 nvme0n1 00:30:06.263 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.263 11:17:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.263 11:17:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:06.263 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.263 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.263 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.263 11:17:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.263 11:17:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.263 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.263 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.263 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.263 11:17:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:06.263 11:17:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:06.263 11:17:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:06.263 11:17:34 -- host/auth.sh@44 -- # digest=sha256 00:30:06.263 11:17:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.263 11:17:34 -- host/auth.sh@44 
-- # keyid=2 00:30:06.263 11:17:34 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:06.263 11:17:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:06.263 11:17:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:06.263 11:17:34 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:06.263 11:17:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:30:06.263 11:17:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:06.263 11:17:34 -- host/auth.sh@68 -- # digest=sha256 00:30:06.263 11:17:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:06.263 11:17:34 -- host/auth.sh@68 -- # keyid=2 00:30:06.263 11:17:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:06.263 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.263 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.263 11:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.263 11:17:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:06.263 11:17:34 -- nvmf/common.sh@717 -- # local ip 00:30:06.263 11:17:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:06.263 11:17:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:06.263 11:17:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.263 11:17:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.263 11:17:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:06.263 11:17:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.263 11:17:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:06.263 11:17:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:06.263 11:17:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:06.263 11:17:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:06.263 11:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.263 11:17:34 -- common/autotest_common.sh@10 -- # set +x 00:30:06.522 nvme0n1 00:30:06.522 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.522 11:17:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.522 11:17:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:06.522 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.522 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:06.522 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.522 11:17:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.522 11:17:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.522 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.522 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:06.522 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.522 11:17:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:06.522 11:17:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:06.522 11:17:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:06.522 11:17:35 -- host/auth.sh@44 -- # digest=sha256 00:30:06.522 11:17:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.522 11:17:35 -- host/auth.sh@44 -- # keyid=3 00:30:06.522 11:17:35 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:06.522 11:17:35 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:06.522 11:17:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:06.522 11:17:35 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:06.522 11:17:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:30:06.522 11:17:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:06.522 11:17:35 -- host/auth.sh@68 -- # digest=sha256 00:30:06.522 11:17:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:06.522 11:17:35 -- host/auth.sh@68 -- # keyid=3 00:30:06.522 11:17:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:06.522 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.522 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:06.522 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.522 11:17:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:06.522 11:17:35 -- nvmf/common.sh@717 -- # local ip 00:30:06.522 11:17:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:06.522 11:17:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:06.522 11:17:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.522 11:17:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.522 11:17:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:06.522 11:17:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.522 11:17:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:06.522 11:17:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:06.522 11:17:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:06.522 11:17:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:06.522 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.522 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:06.779 nvme0n1 00:30:06.779 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.779 11:17:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.779 11:17:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:06.779 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.779 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:06.779 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.779 11:17:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.779 11:17:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.779 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.779 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:06.779 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.779 11:17:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:06.779 11:17:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:06.779 11:17:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:06.779 11:17:35 -- host/auth.sh@44 -- # digest=sha256 00:30:06.779 11:17:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.779 11:17:35 -- host/auth.sh@44 -- # keyid=4 00:30:06.779 11:17:35 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:06.779 11:17:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:06.779 11:17:35 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:30:06.779 11:17:35 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:06.779 11:17:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:30:06.780 11:17:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:06.780 11:17:35 -- host/auth.sh@68 -- # digest=sha256 00:30:06.780 11:17:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:06.780 11:17:35 -- host/auth.sh@68 -- # keyid=4 00:30:06.780 11:17:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:06.780 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.780 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:06.780 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.780 11:17:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:06.780 11:17:35 -- nvmf/common.sh@717 -- # local ip 00:30:06.780 11:17:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:06.780 11:17:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:06.780 11:17:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.780 11:17:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.780 11:17:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:06.780 11:17:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.780 11:17:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:06.780 11:17:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:06.780 11:17:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:06.780 11:17:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:06.780 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.780 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:07.037 nvme0n1 00:30:07.037 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.037 11:17:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.037 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.037 11:17:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:07.037 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:07.037 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.037 11:17:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.037 11:17:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.037 11:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.037 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:30:07.037 11:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.037 11:17:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.037 11:17:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:07.037 11:17:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:07.037 11:17:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:07.037 11:17:35 -- host/auth.sh@44 -- # digest=sha256 00:30:07.037 11:17:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:07.037 11:17:35 -- host/auth.sh@44 -- # keyid=0 00:30:07.037 11:17:35 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:07.037 11:17:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:07.037 11:17:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:08.983 11:17:37 -- 
host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:08.984 11:17:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:30:08.984 11:17:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:08.984 11:17:37 -- host/auth.sh@68 -- # digest=sha256 00:30:08.984 11:17:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:08.984 11:17:37 -- host/auth.sh@68 -- # keyid=0 00:30:08.984 11:17:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:08.984 11:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:08.984 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:30:08.984 11:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:08.984 11:17:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:08.984 11:17:37 -- nvmf/common.sh@717 -- # local ip 00:30:08.984 11:17:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:08.984 11:17:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:08.984 11:17:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.984 11:17:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.984 11:17:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:08.984 11:17:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.984 11:17:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:08.984 11:17:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:08.984 11:17:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:08.984 11:17:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:08.984 11:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:08.984 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:30:09.242 nvme0n1 00:30:09.242 11:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.242 11:17:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.242 11:17:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:09.242 11:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.242 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:30:09.501 11:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.501 11:17:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.501 11:17:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.501 11:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.501 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:30:09.501 11:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.501 11:17:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:09.501 11:17:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:09.501 11:17:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:09.501 11:17:37 -- host/auth.sh@44 -- # digest=sha256 00:30:09.501 11:17:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.501 11:17:37 -- host/auth.sh@44 -- # keyid=1 00:30:09.501 11:17:37 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:09.501 11:17:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:09.501 11:17:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:09.501 11:17:37 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:09.501 11:17:37 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:30:09.501 11:17:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:09.501 11:17:37 -- host/auth.sh@68 -- # digest=sha256 00:30:09.501 11:17:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:09.501 11:17:37 -- host/auth.sh@68 -- # keyid=1 00:30:09.501 11:17:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:09.501 11:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.501 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:30:09.501 11:17:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.501 11:17:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:09.501 11:17:37 -- nvmf/common.sh@717 -- # local ip 00:30:09.501 11:17:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:09.501 11:17:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:09.501 11:17:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.501 11:17:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.502 11:17:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:09.502 11:17:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.502 11:17:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:09.502 11:17:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:09.502 11:17:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:09.502 11:17:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:09.502 11:17:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.502 11:17:37 -- common/autotest_common.sh@10 -- # set +x 00:30:09.758 nvme0n1 00:30:09.758 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.758 11:17:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.758 11:17:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:09.758 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.758 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:09.759 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.759 11:17:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.759 11:17:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.759 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.759 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:09.759 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.759 11:17:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:09.759 11:17:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:09.759 11:17:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:09.759 11:17:38 -- host/auth.sh@44 -- # digest=sha256 00:30:09.759 11:17:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.759 11:17:38 -- host/auth.sh@44 -- # keyid=2 00:30:09.759 11:17:38 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:09.759 11:17:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:09.759 11:17:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:09.759 11:17:38 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:09.759 11:17:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:30:09.759 11:17:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:09.759 11:17:38 -- 
host/auth.sh@68 -- # digest=sha256 00:30:09.759 11:17:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:09.759 11:17:38 -- host/auth.sh@68 -- # keyid=2 00:30:09.759 11:17:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:09.759 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.759 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:10.016 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.016 11:17:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:10.016 11:17:38 -- nvmf/common.sh@717 -- # local ip 00:30:10.016 11:17:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:10.016 11:17:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:10.016 11:17:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.016 11:17:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.016 11:17:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:10.016 11:17:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.016 11:17:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:10.016 11:17:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:10.016 11:17:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:10.016 11:17:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:10.016 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.016 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:10.281 nvme0n1 00:30:10.281 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.281 11:17:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.281 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.281 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:10.281 11:17:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:10.281 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.281 11:17:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.281 11:17:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.281 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.281 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:10.281 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.281 11:17:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:10.281 11:17:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:10.281 11:17:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:10.281 11:17:38 -- host/auth.sh@44 -- # digest=sha256 00:30:10.281 11:17:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:10.281 11:17:38 -- host/auth.sh@44 -- # keyid=3 00:30:10.281 11:17:38 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:10.281 11:17:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:10.281 11:17:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:10.281 11:17:38 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:10.281 11:17:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:30:10.281 11:17:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:10.281 11:17:38 -- host/auth.sh@68 -- # digest=sha256 00:30:10.281 11:17:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:10.281 11:17:38 
-- host/auth.sh@68 -- # keyid=3 00:30:10.281 11:17:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:10.281 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.281 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:10.281 11:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.281 11:17:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:10.281 11:17:38 -- nvmf/common.sh@717 -- # local ip 00:30:10.281 11:17:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:10.281 11:17:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:10.281 11:17:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.281 11:17:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.281 11:17:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:10.281 11:17:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.281 11:17:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:10.281 11:17:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:10.281 11:17:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:10.281 11:17:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:10.281 11:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.281 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:30:10.540 nvme0n1 00:30:10.540 11:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.540 11:17:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.540 11:17:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:10.540 11:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.540 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:30:10.798 11:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.798 11:17:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.798 11:17:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.798 11:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.798 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:30:10.798 11:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.798 11:17:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:10.798 11:17:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:10.798 11:17:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:10.798 11:17:39 -- host/auth.sh@44 -- # digest=sha256 00:30:10.798 11:17:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:10.798 11:17:39 -- host/auth.sh@44 -- # keyid=4 00:30:10.798 11:17:39 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:10.798 11:17:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:10.798 11:17:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:10.798 11:17:39 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:10.798 11:17:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:30:10.798 11:17:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:10.798 11:17:39 -- host/auth.sh@68 -- # digest=sha256 00:30:10.798 11:17:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:10.798 11:17:39 -- host/auth.sh@68 -- # keyid=4 00:30:10.798 11:17:39 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:10.798 11:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.798 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:30:10.798 11:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.798 11:17:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:10.798 11:17:39 -- nvmf/common.sh@717 -- # local ip 00:30:10.798 11:17:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:10.798 11:17:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:10.798 11:17:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.798 11:17:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.798 11:17:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:10.798 11:17:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.798 11:17:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:10.798 11:17:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:10.798 11:17:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:10.798 11:17:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:10.798 11:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.798 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:30:11.056 nvme0n1 00:30:11.056 11:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.056 11:17:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.056 11:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.056 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:30:11.056 11:17:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:11.056 11:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.056 11:17:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.056 11:17:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.056 11:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.056 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:30:11.056 11:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.056 11:17:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:11.056 11:17:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:11.056 11:17:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:11.056 11:17:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:11.056 11:17:39 -- host/auth.sh@44 -- # digest=sha256 00:30:11.056 11:17:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:11.056 11:17:39 -- host/auth.sh@44 -- # keyid=0 00:30:11.056 11:17:39 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:11.056 11:17:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:11.056 11:17:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:15.240 11:17:43 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:15.240 11:17:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:30:15.240 11:17:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:15.240 11:17:43 -- host/auth.sh@68 -- # digest=sha256 00:30:15.240 11:17:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:15.240 11:17:43 -- host/auth.sh@68 -- # keyid=0 00:30:15.240 11:17:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
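The bdev_nvme_set_options call traced just above pins the initiator to a single digest/DH-group pair (sha256 + ffdhe8192) before the controller is re-attached. A minimal standalone sketch of that initiator-side sequence, reconstructed from this trace, follows; it assumes rpc_cmd is the usual SPDK test helper that forwards to scripts/rpc.py over the default RPC socket, and that the named DH-HMAC-CHAP secret (key0) was registered earlier in the test run, outside this excerpt.

    # Sketch reconstructed from the trace above -- not the authoritative test script.
    # Restrict DH-HMAC-CHAP negotiation to one digest and one FFDHE group, then
    # attach the target with the matching named secret and verify the controller.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
    # The attach only succeeds if the DH-HMAC-CHAP handshake completes; confirm and clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
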
00:30:15.240 11:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.240 11:17:43 -- common/autotest_common.sh@10 -- # set +x 00:30:15.240 11:17:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.240 11:17:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:15.240 11:17:43 -- nvmf/common.sh@717 -- # local ip 00:30:15.240 11:17:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:15.240 11:17:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:15.240 11:17:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.240 11:17:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.240 11:17:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:15.240 11:17:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.240 11:17:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:15.240 11:17:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:15.240 11:17:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:15.240 11:17:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:15.240 11:17:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.240 11:17:43 -- common/autotest_common.sh@10 -- # set +x 00:30:15.500 nvme0n1 00:30:15.500 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.500 11:17:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:15.500 11:17:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.500 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.500 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:15.500 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.500 11:17:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.500 11:17:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.500 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.500 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:15.500 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.500 11:17:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:15.500 11:17:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:15.500 11:17:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:15.500 11:17:44 -- host/auth.sh@44 -- # digest=sha256 00:30:15.500 11:17:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:15.500 11:17:44 -- host/auth.sh@44 -- # keyid=1 00:30:15.500 11:17:44 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:15.500 11:17:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:15.500 11:17:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:15.500 11:17:44 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:15.500 11:17:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:30:15.500 11:17:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:15.500 11:17:44 -- host/auth.sh@68 -- # digest=sha256 00:30:15.500 11:17:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:15.500 11:17:44 -- host/auth.sh@68 -- # keyid=1 00:30:15.500 11:17:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:15.500 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.500 11:17:44 -- 
common/autotest_common.sh@10 -- # set +x 00:30:15.500 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.500 11:17:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:15.500 11:17:44 -- nvmf/common.sh@717 -- # local ip 00:30:15.500 11:17:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:15.500 11:17:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:15.500 11:17:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.500 11:17:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.500 11:17:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:15.500 11:17:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.500 11:17:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:15.500 11:17:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:15.500 11:17:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:15.500 11:17:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:15.500 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.500 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:16.435 nvme0n1 00:30:16.435 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.435 11:17:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.435 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.435 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:16.436 11:17:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:16.436 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.436 11:17:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.436 11:17:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.436 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.436 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:16.436 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.436 11:17:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:16.436 11:17:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:16.436 11:17:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:16.436 11:17:44 -- host/auth.sh@44 -- # digest=sha256 00:30:16.436 11:17:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:16.436 11:17:44 -- host/auth.sh@44 -- # keyid=2 00:30:16.436 11:17:44 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:16.436 11:17:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:16.436 11:17:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:16.436 11:17:44 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:16.436 11:17:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:30:16.436 11:17:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:16.436 11:17:44 -- host/auth.sh@68 -- # digest=sha256 00:30:16.436 11:17:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:16.436 11:17:44 -- host/auth.sh@68 -- # keyid=2 00:30:16.436 11:17:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:16.436 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.436 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:16.436 11:17:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.436 11:17:44 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:30:16.436 11:17:44 -- nvmf/common.sh@717 -- # local ip 00:30:16.436 11:17:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:16.436 11:17:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:16.436 11:17:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.436 11:17:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.436 11:17:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:16.436 11:17:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.436 11:17:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:16.436 11:17:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:16.436 11:17:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:16.436 11:17:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:16.436 11:17:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.436 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:17.001 nvme0n1 00:30:17.001 11:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.001 11:17:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.002 11:17:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:17.002 11:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.002 11:17:45 -- common/autotest_common.sh@10 -- # set +x 00:30:17.002 11:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.002 11:17:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.002 11:17:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.002 11:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.002 11:17:45 -- common/autotest_common.sh@10 -- # set +x 00:30:17.002 11:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.002 11:17:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:17.002 11:17:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:17.002 11:17:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:17.002 11:17:45 -- host/auth.sh@44 -- # digest=sha256 00:30:17.002 11:17:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:17.002 11:17:45 -- host/auth.sh@44 -- # keyid=3 00:30:17.002 11:17:45 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:17.002 11:17:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:17.002 11:17:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:17.002 11:17:45 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:17.002 11:17:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:30:17.002 11:17:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:17.002 11:17:45 -- host/auth.sh@68 -- # digest=sha256 00:30:17.002 11:17:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:17.002 11:17:45 -- host/auth.sh@68 -- # keyid=3 00:30:17.002 11:17:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:17.002 11:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.002 11:17:45 -- common/autotest_common.sh@10 -- # set +x 00:30:17.002 11:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.002 11:17:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:17.002 11:17:45 -- nvmf/common.sh@717 -- # local ip 00:30:17.002 11:17:45 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:30:17.002 11:17:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:17.002 11:17:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.002 11:17:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.002 11:17:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:17.002 11:17:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.002 11:17:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:17.002 11:17:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:17.002 11:17:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:17.002 11:17:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:17.002 11:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.002 11:17:45 -- common/autotest_common.sh@10 -- # set +x 00:30:17.574 nvme0n1 00:30:17.574 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.574 11:17:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.574 11:17:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:17.574 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.574 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:17.574 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.574 11:17:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.574 11:17:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.574 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.574 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:17.574 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.574 11:17:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:17.574 11:17:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:17.574 11:17:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:17.574 11:17:46 -- host/auth.sh@44 -- # digest=sha256 00:30:17.574 11:17:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:17.574 11:17:46 -- host/auth.sh@44 -- # keyid=4 00:30:17.574 11:17:46 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:17.574 11:17:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:17.574 11:17:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:17.574 11:17:46 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:17.574 11:17:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:30:17.574 11:17:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:17.574 11:17:46 -- host/auth.sh@68 -- # digest=sha256 00:30:17.574 11:17:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:17.574 11:17:46 -- host/auth.sh@68 -- # keyid=4 00:30:17.574 11:17:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:17.574 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.574 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:17.574 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.574 11:17:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:17.574 11:17:46 -- nvmf/common.sh@717 -- # local ip 00:30:17.574 11:17:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:17.574 11:17:46 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:30:17.574 11:17:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.574 11:17:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.574 11:17:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:17.574 11:17:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.574 11:17:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:17.574 11:17:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:17.574 11:17:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:17.574 11:17:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:17.574 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.574 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.140 nvme0n1 00:30:18.140 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.140 11:17:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.140 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.140 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.140 11:17:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:18.140 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.400 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.400 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:18.400 11:17:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:18.400 11:17:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:18.400 11:17:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:18.400 11:17:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:18.400 11:17:46 -- host/auth.sh@44 -- # digest=sha384 00:30:18.400 11:17:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:18.400 11:17:46 -- host/auth.sh@44 -- # keyid=0 00:30:18.400 11:17:46 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:18.400 11:17:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:18.400 11:17:46 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:18.400 11:17:46 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:18.400 11:17:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:30:18.400 11:17:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:18.400 11:17:46 -- host/auth.sh@68 -- # digest=sha384 00:30:18.400 11:17:46 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:18.400 11:17:46 -- host/auth.sh@68 -- # keyid=0 00:30:18.400 11:17:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:18.400 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.400 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:18.400 11:17:46 -- nvmf/common.sh@717 -- # local ip 00:30:18.400 11:17:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:18.400 11:17:46 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:30:18.400 11:17:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.400 11:17:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.400 11:17:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:18.400 11:17:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.400 11:17:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:18.400 11:17:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:18.400 11:17:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:18.400 11:17:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:18.400 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.400 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 nvme0n1 00:30:18.400 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.400 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.400 11:17:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:18.400 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.400 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.400 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:18.400 11:17:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:18.400 11:17:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:18.400 11:17:46 -- host/auth.sh@44 -- # digest=sha384 00:30:18.400 11:17:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:18.400 11:17:46 -- host/auth.sh@44 -- # keyid=1 00:30:18.400 11:17:46 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:18.400 11:17:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:18.400 11:17:46 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:18.400 11:17:46 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:18.400 11:17:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:30:18.400 11:17:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:18.400 11:17:46 -- host/auth.sh@68 -- # digest=sha384 00:30:18.400 11:17:46 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:18.400 11:17:46 -- host/auth.sh@68 -- # keyid=1 00:30:18.400 11:17:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:18.400 11:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.400 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 11:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.400 11:17:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:18.400 11:17:46 -- nvmf/common.sh@717 -- # local ip 00:30:18.400 11:17:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:18.400 11:17:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:18.400 11:17:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.400 
11:17:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.400 11:17:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:18.400 11:17:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.400 11:17:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:18.400 11:17:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:18.400 11:17:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:18.400 11:17:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:18.400 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.400 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.658 nvme0n1 00:30:18.658 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.658 11:17:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.658 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.658 11:17:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:18.658 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.658 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.658 11:17:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.658 11:17:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.658 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.658 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.658 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.658 11:17:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:18.658 11:17:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:18.658 11:17:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:18.658 11:17:47 -- host/auth.sh@44 -- # digest=sha384 00:30:18.658 11:17:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:18.658 11:17:47 -- host/auth.sh@44 -- # keyid=2 00:30:18.658 11:17:47 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:18.658 11:17:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:18.658 11:17:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:18.658 11:17:47 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:18.658 11:17:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:30:18.658 11:17:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:18.658 11:17:47 -- host/auth.sh@68 -- # digest=sha384 00:30:18.658 11:17:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:18.658 11:17:47 -- host/auth.sh@68 -- # keyid=2 00:30:18.658 11:17:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:18.658 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.658 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.658 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.658 11:17:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:18.658 11:17:47 -- nvmf/common.sh@717 -- # local ip 00:30:18.658 11:17:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:18.658 11:17:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:18.658 11:17:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.658 11:17:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.658 11:17:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:18.658 11:17:47 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.658 11:17:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:18.658 11:17:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:18.658 11:17:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:18.659 11:17:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:18.659 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.659 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.659 nvme0n1 00:30:18.659 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.659 11:17:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.659 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.659 11:17:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:18.659 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.659 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.916 11:17:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.916 11:17:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.916 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.917 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.917 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.917 11:17:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:18.917 11:17:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:18.917 11:17:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:18.917 11:17:47 -- host/auth.sh@44 -- # digest=sha384 00:30:18.917 11:17:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:18.917 11:17:47 -- host/auth.sh@44 -- # keyid=3 00:30:18.917 11:17:47 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:18.917 11:17:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:18.917 11:17:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:18.917 11:17:47 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:18.917 11:17:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:30:18.917 11:17:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:18.917 11:17:47 -- host/auth.sh@68 -- # digest=sha384 00:30:18.917 11:17:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:18.917 11:17:47 -- host/auth.sh@68 -- # keyid=3 00:30:18.917 11:17:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:18.917 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.917 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.917 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.917 11:17:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:18.917 11:17:47 -- nvmf/common.sh@717 -- # local ip 00:30:18.917 11:17:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:18.917 11:17:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:18.917 11:17:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.917 11:17:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.917 11:17:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:18.917 11:17:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.917 11:17:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
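The get_main_ns_ip expansion that keeps reappearing in this trace (nvmf/common.sh@717-731) only resolves which address the host should dial: it maps the transport to a variable name (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP) and echoes that variable's value, 10.0.0.1 in this run. A rough reconstruction from the xtrace, with the transport variable name assumed since the trace only shows its value:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # The transport variable name is an assumption; the trace only shows its value, "tcp".
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # NVMF_INITIATOR_IP resolves to 10.0.0.1 in this run
      echo "${!ip}"
  }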
00:30:18.917 11:17:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:18.917 11:17:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:18.917 11:17:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:18.917 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.917 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.917 nvme0n1 00:30:18.917 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.917 11:17:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.917 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.917 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.917 11:17:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:18.917 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.917 11:17:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.917 11:17:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.917 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.917 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.917 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.917 11:17:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:18.917 11:17:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:18.917 11:17:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:18.917 11:17:47 -- host/auth.sh@44 -- # digest=sha384 00:30:18.917 11:17:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:18.917 11:17:47 -- host/auth.sh@44 -- # keyid=4 00:30:18.917 11:17:47 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:18.917 11:17:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:18.917 11:17:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:18.917 11:17:47 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:18.917 11:17:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:30:18.917 11:17:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:18.917 11:17:47 -- host/auth.sh@68 -- # digest=sha384 00:30:18.917 11:17:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:18.917 11:17:47 -- host/auth.sh@68 -- # keyid=4 00:30:18.917 11:17:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:18.917 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.917 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:18.917 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.917 11:17:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:18.917 11:17:47 -- nvmf/common.sh@717 -- # local ip 00:30:18.917 11:17:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:18.917 11:17:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:18.917 11:17:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.917 11:17:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.917 11:17:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:18.917 11:17:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.917 11:17:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:18.917 11:17:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:18.917 
11:17:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:18.917 11:17:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:18.917 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.917 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 nvme0n1 00:30:19.176 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.176 11:17:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.176 11:17:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:19.176 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.176 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.176 11:17:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.176 11:17:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.176 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.176 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.176 11:17:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:19.176 11:17:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:19.176 11:17:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:19.176 11:17:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:19.176 11:17:47 -- host/auth.sh@44 -- # digest=sha384 00:30:19.176 11:17:47 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:19.176 11:17:47 -- host/auth.sh@44 -- # keyid=0 00:30:19.176 11:17:47 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:19.176 11:17:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:19.176 11:17:47 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:19.176 11:17:47 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:19.176 11:17:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:30:19.176 11:17:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:19.176 11:17:47 -- host/auth.sh@68 -- # digest=sha384 00:30:19.176 11:17:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:19.176 11:17:47 -- host/auth.sh@68 -- # keyid=0 00:30:19.176 11:17:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:19.176 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.176 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.176 11:17:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:19.176 11:17:47 -- nvmf/common.sh@717 -- # local ip 00:30:19.176 11:17:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:19.176 11:17:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:19.176 11:17:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.176 11:17:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.176 11:17:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:19.176 11:17:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.176 11:17:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:19.176 11:17:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:19.176 11:17:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:19.176 11:17:47 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:19.176 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.176 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.434 nvme0n1 00:30:19.434 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.434 11:17:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.434 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.434 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.434 11:17:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:19.434 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.434 11:17:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.434 11:17:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.434 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.434 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.434 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.434 11:17:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:19.434 11:17:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:19.434 11:17:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:19.434 11:17:47 -- host/auth.sh@44 -- # digest=sha384 00:30:19.434 11:17:47 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:19.434 11:17:47 -- host/auth.sh@44 -- # keyid=1 00:30:19.434 11:17:47 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:19.434 11:17:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:19.434 11:17:47 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:19.434 11:17:47 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:19.434 11:17:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:30:19.434 11:17:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:19.434 11:17:47 -- host/auth.sh@68 -- # digest=sha384 00:30:19.434 11:17:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:19.434 11:17:47 -- host/auth.sh@68 -- # keyid=1 00:30:19.434 11:17:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:19.434 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.434 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.434 11:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.434 11:17:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:19.434 11:17:47 -- nvmf/common.sh@717 -- # local ip 00:30:19.434 11:17:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:19.434 11:17:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:19.434 11:17:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.434 11:17:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.434 11:17:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:19.434 11:17:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.434 11:17:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:19.434 11:17:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:19.434 11:17:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:19.434 11:17:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:19.434 11:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.434 11:17:47 -- common/autotest_common.sh@10 -- # set +x 00:30:19.434 nvme0n1 00:30:19.434 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.434 11:17:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.434 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.434 11:17:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:19.434 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.434 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.434 11:17:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.434 11:17:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.434 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.434 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.691 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.691 11:17:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:19.691 11:17:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:19.691 11:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:19.691 11:17:48 -- host/auth.sh@44 -- # digest=sha384 00:30:19.691 11:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:19.691 11:17:48 -- host/auth.sh@44 -- # keyid=2 00:30:19.691 11:17:48 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:19.691 11:17:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:19.691 11:17:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:19.691 11:17:48 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:19.691 11:17:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:30:19.691 11:17:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:19.691 11:17:48 -- host/auth.sh@68 -- # digest=sha384 00:30:19.691 11:17:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:19.691 11:17:48 -- host/auth.sh@68 -- # keyid=2 00:30:19.691 11:17:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:19.691 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.691 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.691 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.691 11:17:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:19.691 11:17:48 -- nvmf/common.sh@717 -- # local ip 00:30:19.691 11:17:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:19.691 11:17:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:19.691 11:17:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.691 11:17:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.691 11:17:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:19.691 11:17:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.691 11:17:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:19.691 11:17:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:19.691 11:17:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:19.691 11:17:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:19.691 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.691 
11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.691 nvme0n1 00:30:19.691 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.691 11:17:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.691 11:17:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:19.691 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.691 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.691 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.691 11:17:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.691 11:17:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.691 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.691 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.691 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.691 11:17:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:19.691 11:17:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:19.691 11:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:19.691 11:17:48 -- host/auth.sh@44 -- # digest=sha384 00:30:19.691 11:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:19.691 11:17:48 -- host/auth.sh@44 -- # keyid=3 00:30:19.691 11:17:48 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:19.691 11:17:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:19.691 11:17:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:19.691 11:17:48 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:19.691 11:17:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:30:19.691 11:17:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:19.691 11:17:48 -- host/auth.sh@68 -- # digest=sha384 00:30:19.691 11:17:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:19.691 11:17:48 -- host/auth.sh@68 -- # keyid=3 00:30:19.691 11:17:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:19.691 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.691 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.691 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.691 11:17:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:19.691 11:17:48 -- nvmf/common.sh@717 -- # local ip 00:30:19.692 11:17:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:19.692 11:17:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:19.692 11:17:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.692 11:17:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.692 11:17:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:19.692 11:17:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.692 11:17:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:19.692 11:17:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:19.692 11:17:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:19.692 11:17:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:19.692 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.692 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.949 nvme0n1 00:30:19.949 11:17:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.949 11:17:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:19.949 11:17:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.949 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.949 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.949 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.949 11:17:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.949 11:17:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.949 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.949 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.949 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.949 11:17:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:19.949 11:17:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:19.949 11:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:19.949 11:17:48 -- host/auth.sh@44 -- # digest=sha384 00:30:19.949 11:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:19.949 11:17:48 -- host/auth.sh@44 -- # keyid=4 00:30:19.949 11:17:48 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:19.949 11:17:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:19.949 11:17:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:19.949 11:17:48 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:19.949 11:17:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:30:19.949 11:17:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:19.949 11:17:48 -- host/auth.sh@68 -- # digest=sha384 00:30:19.949 11:17:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:19.949 11:17:48 -- host/auth.sh@68 -- # keyid=4 00:30:19.949 11:17:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:19.949 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.949 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:19.949 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.949 11:17:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:19.949 11:17:48 -- nvmf/common.sh@717 -- # local ip 00:30:19.949 11:17:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:19.949 11:17:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:19.949 11:17:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.949 11:17:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.949 11:17:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:19.950 11:17:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.950 11:17:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:19.950 11:17:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:19.950 11:17:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:19.950 11:17:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:19.950 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.950 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.208 nvme0n1 00:30:20.208 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.208 11:17:48 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.208 11:17:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:20.208 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.208 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.208 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.208 11:17:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.208 11:17:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.208 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.208 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.208 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.208 11:17:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:20.208 11:17:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:20.208 11:17:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:20.208 11:17:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:20.208 11:17:48 -- host/auth.sh@44 -- # digest=sha384 00:30:20.208 11:17:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:20.208 11:17:48 -- host/auth.sh@44 -- # keyid=0 00:30:20.208 11:17:48 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:20.208 11:17:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:20.208 11:17:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:20.208 11:17:48 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:20.208 11:17:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:30:20.208 11:17:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:20.208 11:17:48 -- host/auth.sh@68 -- # digest=sha384 00:30:20.208 11:17:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:20.208 11:17:48 -- host/auth.sh@68 -- # keyid=0 00:30:20.208 11:17:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:20.208 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.208 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.208 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.208 11:17:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:20.208 11:17:48 -- nvmf/common.sh@717 -- # local ip 00:30:20.208 11:17:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:20.208 11:17:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:20.208 11:17:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.208 11:17:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.208 11:17:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:20.208 11:17:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.208 11:17:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:20.208 11:17:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:20.208 11:17:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:20.208 11:17:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:20.208 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.208 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.466 nvme0n1 00:30:20.466 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.466 11:17:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.466 11:17:48 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.466 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.466 11:17:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:20.466 11:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.466 11:17:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.466 11:17:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.466 11:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.466 11:17:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.466 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.466 11:17:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:20.466 11:17:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:20.466 11:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:20.466 11:17:49 -- host/auth.sh@44 -- # digest=sha384 00:30:20.466 11:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:20.466 11:17:49 -- host/auth.sh@44 -- # keyid=1 00:30:20.466 11:17:49 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:20.466 11:17:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:20.466 11:17:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:20.466 11:17:49 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:20.466 11:17:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:30:20.466 11:17:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:20.467 11:17:49 -- host/auth.sh@68 -- # digest=sha384 00:30:20.467 11:17:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:20.467 11:17:49 -- host/auth.sh@68 -- # keyid=1 00:30:20.467 11:17:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:20.467 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.467 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.467 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.467 11:17:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:20.467 11:17:49 -- nvmf/common.sh@717 -- # local ip 00:30:20.467 11:17:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:20.467 11:17:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:20.467 11:17:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.467 11:17:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.467 11:17:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:20.467 11:17:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.467 11:17:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:20.467 11:17:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:20.467 11:17:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:20.467 11:17:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:20.467 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.467 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.726 nvme0n1 00:30:20.726 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.726 11:17:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.726 11:17:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:20.726 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 
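Zooming out, the loop markers in this trace (host/auth.sh@107-111) show the structure driving all of these records: every digest is swept against every DH group and every configured key, re-keying the kernel nvmet target and then re-running the connect/verify/detach cycle. Sketched from those markers, with the array contents taken only from the values visible in this run (sha256/sha384, ffdhe2048 through ffdhe8192, key ids 0-4):

  for digest in "${digests[@]}"; do              # host/auth.sh@107
      for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@108
          for keyid in "${!keys[@]}"; do         # host/auth.sh@109
              # @110: push the digest/dhgroup/key combination onto the kernel nvmet target side.
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              # @111: attach from the SPDK host with the matching key, verify, detach.
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done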
00:30:20.726 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.726 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.726 11:17:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.726 11:17:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.726 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.726 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.726 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.726 11:17:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:20.726 11:17:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:20.726 11:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:20.726 11:17:49 -- host/auth.sh@44 -- # digest=sha384 00:30:20.726 11:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:20.726 11:17:49 -- host/auth.sh@44 -- # keyid=2 00:30:20.726 11:17:49 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:20.726 11:17:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:20.726 11:17:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:20.726 11:17:49 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:20.726 11:17:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:30:20.726 11:17:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:20.726 11:17:49 -- host/auth.sh@68 -- # digest=sha384 00:30:20.726 11:17:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:20.726 11:17:49 -- host/auth.sh@68 -- # keyid=2 00:30:20.726 11:17:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:20.726 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.726 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.726 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.726 11:17:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:20.726 11:17:49 -- nvmf/common.sh@717 -- # local ip 00:30:20.726 11:17:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:20.726 11:17:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:20.726 11:17:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.726 11:17:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.726 11:17:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:20.726 11:17:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.726 11:17:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:20.726 11:17:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:20.726 11:17:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:20.726 11:17:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:20.726 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.726 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.986 nvme0n1 00:30:20.986 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.986 11:17:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.986 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.986 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.986 11:17:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:20.986 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.986 11:17:49 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.986 11:17:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.986 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.986 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.986 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.986 11:17:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:20.986 11:17:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:30:20.986 11:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:20.986 11:17:49 -- host/auth.sh@44 -- # digest=sha384 00:30:20.986 11:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:20.986 11:17:49 -- host/auth.sh@44 -- # keyid=3 00:30:20.986 11:17:49 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:20.986 11:17:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:20.986 11:17:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:20.986 11:17:49 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:20.986 11:17:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:30:20.986 11:17:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:20.986 11:17:49 -- host/auth.sh@68 -- # digest=sha384 00:30:20.986 11:17:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:20.986 11:17:49 -- host/auth.sh@68 -- # keyid=3 00:30:20.986 11:17:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:20.986 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.986 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:20.986 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.986 11:17:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:20.986 11:17:49 -- nvmf/common.sh@717 -- # local ip 00:30:20.986 11:17:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:20.986 11:17:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:20.986 11:17:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.986 11:17:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.986 11:17:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:20.986 11:17:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.986 11:17:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:20.986 11:17:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:20.986 11:17:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:20.986 11:17:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:20.986 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:20.986 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:21.244 nvme0n1 00:30:21.244 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.244 11:17:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.244 11:17:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:21.244 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.244 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:21.244 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.244 11:17:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.244 11:17:49 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:21.244 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.244 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:21.244 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.244 11:17:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:21.244 11:17:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:21.244 11:17:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:21.244 11:17:49 -- host/auth.sh@44 -- # digest=sha384 00:30:21.244 11:17:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:21.244 11:17:49 -- host/auth.sh@44 -- # keyid=4 00:30:21.244 11:17:49 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:21.244 11:17:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:21.244 11:17:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:21.244 11:17:49 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:21.244 11:17:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:30:21.244 11:17:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:21.244 11:17:49 -- host/auth.sh@68 -- # digest=sha384 00:30:21.244 11:17:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:21.244 11:17:49 -- host/auth.sh@68 -- # keyid=4 00:30:21.244 11:17:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:21.244 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.244 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:21.244 11:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.244 11:17:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:21.244 11:17:49 -- nvmf/common.sh@717 -- # local ip 00:30:21.244 11:17:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:21.244 11:17:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:21.244 11:17:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.244 11:17:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.244 11:17:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:21.244 11:17:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.244 11:17:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:21.244 11:17:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:21.244 11:17:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:21.244 11:17:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:21.244 11:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.244 11:17:49 -- common/autotest_common.sh@10 -- # set +x 00:30:21.503 nvme0n1 00:30:21.503 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.503 11:17:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.503 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.503 11:17:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:21.503 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:21.503 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.503 11:17:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.503 11:17:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.503 11:17:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.503 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:21.503 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.503 11:17:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:21.503 11:17:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:21.503 11:17:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:21.503 11:17:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:21.503 11:17:50 -- host/auth.sh@44 -- # digest=sha384 00:30:21.503 11:17:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:21.503 11:17:50 -- host/auth.sh@44 -- # keyid=0 00:30:21.503 11:17:50 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:21.503 11:17:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:21.503 11:17:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:21.503 11:17:50 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:21.503 11:17:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:30:21.503 11:17:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:21.503 11:17:50 -- host/auth.sh@68 -- # digest=sha384 00:30:21.503 11:17:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:21.503 11:17:50 -- host/auth.sh@68 -- # keyid=0 00:30:21.503 11:17:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:21.503 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.503 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:21.503 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.503 11:17:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:21.503 11:17:50 -- nvmf/common.sh@717 -- # local ip 00:30:21.503 11:17:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:21.503 11:17:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:21.503 11:17:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.503 11:17:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.503 11:17:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:21.503 11:17:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.503 11:17:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:21.503 11:17:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:21.503 11:17:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:21.503 11:17:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:21.503 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.503 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:22.070 nvme0n1 00:30:22.070 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.070 11:17:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.070 11:17:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:22.070 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.070 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:22.070 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.070 11:17:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.070 11:17:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.070 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.070 11:17:50 -- 
common/autotest_common.sh@10 -- # set +x 00:30:22.070 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.070 11:17:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:22.070 11:17:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:22.070 11:17:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:22.070 11:17:50 -- host/auth.sh@44 -- # digest=sha384 00:30:22.070 11:17:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:22.070 11:17:50 -- host/auth.sh@44 -- # keyid=1 00:30:22.070 11:17:50 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:22.070 11:17:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:22.070 11:17:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:22.070 11:17:50 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:22.070 11:17:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:30:22.070 11:17:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:22.070 11:17:50 -- host/auth.sh@68 -- # digest=sha384 00:30:22.070 11:17:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:22.070 11:17:50 -- host/auth.sh@68 -- # keyid=1 00:30:22.070 11:17:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:22.070 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.070 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:22.070 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.070 11:17:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:22.070 11:17:50 -- nvmf/common.sh@717 -- # local ip 00:30:22.070 11:17:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:22.070 11:17:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:22.070 11:17:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.070 11:17:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.070 11:17:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:22.070 11:17:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.070 11:17:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:22.070 11:17:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:22.070 11:17:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:22.070 11:17:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:22.070 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.070 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:22.328 nvme0n1 00:30:22.328 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.328 11:17:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.328 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.328 11:17:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:22.328 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:22.328 11:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.586 11:17:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.586 11:17:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.586 11:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.586 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:30:22.586 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
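Each (digest, dhgroup, keyid) combination then runs the same four host-side RPCs, visible as rpc_cmd lines in the trace. Condensed into one place, and assuming scripts/rpc.py as the transport for what the test wraps in rpc_cmd (the RPC names, flags, address, and NQNs below are copied from this log; key1 refers to a key registered earlier in the run):

    # Constrain the host driver to the digest/dhgroup under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Attach to the target, authenticating with the key slot under test
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
    # Authentication succeeded if the controller shows up
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    # Tear down before the next combination
    scripts/rpc.py bdev_nvme_detach_controller nvme0
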
00:30:22.586 11:17:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:22.586 11:17:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:22.586 11:17:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:22.586 11:17:51 -- host/auth.sh@44 -- # digest=sha384 00:30:22.586 11:17:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:22.586 11:17:51 -- host/auth.sh@44 -- # keyid=2 00:30:22.586 11:17:51 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:22.586 11:17:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:22.586 11:17:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:22.586 11:17:51 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:22.586 11:17:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:30:22.586 11:17:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:22.586 11:17:51 -- host/auth.sh@68 -- # digest=sha384 00:30:22.586 11:17:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:22.586 11:17:51 -- host/auth.sh@68 -- # keyid=2 00:30:22.586 11:17:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:22.586 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.586 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:22.586 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.586 11:17:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:22.586 11:17:51 -- nvmf/common.sh@717 -- # local ip 00:30:22.586 11:17:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:22.586 11:17:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:22.586 11:17:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.586 11:17:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.586 11:17:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:22.586 11:17:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.586 11:17:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:22.586 11:17:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:22.586 11:17:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:22.586 11:17:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:22.586 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.586 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:22.845 nvme0n1 00:30:22.845 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.845 11:17:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.845 11:17:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:22.845 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.845 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:22.845 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.845 11:17:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.845 11:17:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.845 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.845 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:22.845 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.845 11:17:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:22.845 11:17:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
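The DHHC-1 strings themselves follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 of secret plus 4-byte CRC>:, where the second field marks how (or whether) the secret was transformed -- 00 for none, with 01/02/03 conventionally corresponding to SHA-256/384/512; treat that mapping as background, since it is not asserted anywhere in this trace. The keys in this run decode to 36, 52, and 68 bytes, i.e. 32/48/64-byte secrets plus the CRC, which can be checked directly:

    # Length check for one of the keys appearing in this log
    key='DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c   # 36 bytes = 32-byte secret + CRC32
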
00:30:22.845 11:17:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:22.845 11:17:51 -- host/auth.sh@44 -- # digest=sha384 00:30:22.845 11:17:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:22.845 11:17:51 -- host/auth.sh@44 -- # keyid=3 00:30:22.845 11:17:51 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:22.845 11:17:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:22.845 11:17:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:22.845 11:17:51 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:22.845 11:17:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:30:22.845 11:17:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:22.845 11:17:51 -- host/auth.sh@68 -- # digest=sha384 00:30:22.845 11:17:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:22.845 11:17:51 -- host/auth.sh@68 -- # keyid=3 00:30:22.845 11:17:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:22.845 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.845 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:22.845 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.845 11:17:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:22.845 11:17:51 -- nvmf/common.sh@717 -- # local ip 00:30:22.845 11:17:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:22.845 11:17:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:22.845 11:17:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.845 11:17:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.845 11:17:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:22.845 11:17:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.845 11:17:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:22.845 11:17:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:22.845 11:17:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:22.845 11:17:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:22.845 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.845 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:23.413 nvme0n1 00:30:23.413 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.413 11:17:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.413 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.413 11:17:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:23.413 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:23.413 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.413 11:17:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.413 11:17:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.413 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.413 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:23.413 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.413 11:17:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:23.413 11:17:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:23.413 11:17:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:23.413 11:17:51 -- host/auth.sh@44 -- 
# digest=sha384 00:30:23.413 11:17:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:23.413 11:17:51 -- host/auth.sh@44 -- # keyid=4 00:30:23.413 11:17:51 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:23.413 11:17:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:23.413 11:17:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:23.413 11:17:51 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:23.413 11:17:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:30:23.413 11:17:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:23.413 11:17:51 -- host/auth.sh@68 -- # digest=sha384 00:30:23.413 11:17:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:23.413 11:17:51 -- host/auth.sh@68 -- # keyid=4 00:30:23.413 11:17:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:23.413 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.413 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:23.413 11:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.413 11:17:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:23.413 11:17:51 -- nvmf/common.sh@717 -- # local ip 00:30:23.413 11:17:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:23.413 11:17:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:23.413 11:17:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.413 11:17:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.413 11:17:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:23.413 11:17:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.413 11:17:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:23.413 11:17:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:23.413 11:17:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:23.413 11:17:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:23.413 11:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.413 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:30:23.671 nvme0n1 00:30:23.671 11:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.671 11:17:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.671 11:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.671 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:30:23.671 11:17:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:23.671 11:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.671 11:17:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.671 11:17:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.671 11:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.671 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:30:23.928 11:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.928 11:17:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:23.928 11:17:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:23.928 11:17:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:23.928 11:17:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:23.928 11:17:52 -- host/auth.sh@44 -- # 
digest=sha384 00:30:23.928 11:17:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:23.928 11:17:52 -- host/auth.sh@44 -- # keyid=0 00:30:23.928 11:17:52 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:23.928 11:17:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:23.928 11:17:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:23.928 11:17:52 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:23.928 11:17:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:30:23.928 11:17:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:23.928 11:17:52 -- host/auth.sh@68 -- # digest=sha384 00:30:23.928 11:17:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:23.928 11:17:52 -- host/auth.sh@68 -- # keyid=0 00:30:23.928 11:17:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:23.928 11:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.928 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:30:23.928 11:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:23.928 11:17:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:23.928 11:17:52 -- nvmf/common.sh@717 -- # local ip 00:30:23.928 11:17:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:23.928 11:17:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:23.928 11:17:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.928 11:17:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.928 11:17:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:23.928 11:17:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.928 11:17:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:23.928 11:17:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:23.928 11:17:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:23.928 11:17:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:23.928 11:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:23.928 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.495 nvme0n1 00:30:24.495 11:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.495 11:17:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.495 11:17:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:24.495 11:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.495 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.495 11:17:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.495 11:17:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.495 11:17:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.495 11:17:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.495 11:17:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.495 11:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.495 11:17:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:24.495 11:17:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:24.495 11:17:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:24.495 11:17:53 -- host/auth.sh@44 -- # digest=sha384 00:30:24.495 11:17:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:24.495 11:17:53 -- host/auth.sh@44 -- # keyid=1 00:30:24.495 11:17:53 -- 
host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:24.495 11:17:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:24.495 11:17:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:24.495 11:17:53 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:24.495 11:17:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:30:24.495 11:17:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:24.495 11:17:53 -- host/auth.sh@68 -- # digest=sha384 00:30:24.495 11:17:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:24.495 11:17:53 -- host/auth.sh@68 -- # keyid=1 00:30:24.495 11:17:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:24.495 11:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.495 11:17:53 -- common/autotest_common.sh@10 -- # set +x 00:30:24.495 11:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:24.495 11:17:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:24.495 11:17:53 -- nvmf/common.sh@717 -- # local ip 00:30:24.495 11:17:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:24.495 11:17:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:24.495 11:17:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.495 11:17:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.495 11:17:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:24.495 11:17:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.495 11:17:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:24.495 11:17:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:24.495 11:17:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:24.495 11:17:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:24.495 11:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:24.495 11:17:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.060 nvme0n1 00:30:25.060 11:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.060 11:17:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.060 11:17:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:25.060 11:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.060 11:17:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.060 11:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.318 11:17:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.318 11:17:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.318 11:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.318 11:17:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.318 11:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.318 11:17:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:25.318 11:17:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:25.318 11:17:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:25.318 11:17:53 -- host/auth.sh@44 -- # digest=sha384 00:30:25.318 11:17:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:25.318 11:17:53 -- host/auth.sh@44 -- # keyid=2 00:30:25.318 11:17:53 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:25.318 11:17:53 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:25.318 11:17:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:25.318 11:17:53 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:25.318 11:17:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:30:25.318 11:17:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:25.318 11:17:53 -- host/auth.sh@68 -- # digest=sha384 00:30:25.318 11:17:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:25.318 11:17:53 -- host/auth.sh@68 -- # keyid=2 00:30:25.318 11:17:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:25.318 11:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.318 11:17:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.318 11:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.318 11:17:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:25.318 11:17:53 -- nvmf/common.sh@717 -- # local ip 00:30:25.318 11:17:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:25.318 11:17:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:25.318 11:17:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.318 11:17:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.318 11:17:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:25.318 11:17:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.318 11:17:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:25.318 11:17:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:25.318 11:17:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:25.318 11:17:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:25.318 11:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.318 11:17:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.884 nvme0n1 00:30:25.884 11:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.884 11:17:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.884 11:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.884 11:17:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:25.884 11:17:54 -- common/autotest_common.sh@10 -- # set +x 00:30:25.884 11:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.884 11:17:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.884 11:17:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.884 11:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.884 11:17:54 -- common/autotest_common.sh@10 -- # set +x 00:30:25.884 11:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.884 11:17:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:25.884 11:17:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:25.884 11:17:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:25.884 11:17:54 -- host/auth.sh@44 -- # digest=sha384 00:30:25.884 11:17:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:25.884 11:17:54 -- host/auth.sh@44 -- # keyid=3 00:30:25.884 11:17:54 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:25.884 11:17:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:25.884 11:17:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:25.884 11:17:54 -- host/auth.sh@49 
-- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:25.884 11:17:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:30:25.884 11:17:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:25.884 11:17:54 -- host/auth.sh@68 -- # digest=sha384 00:30:25.884 11:17:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:25.884 11:17:54 -- host/auth.sh@68 -- # keyid=3 00:30:25.884 11:17:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:25.884 11:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.884 11:17:54 -- common/autotest_common.sh@10 -- # set +x 00:30:25.884 11:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.884 11:17:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:25.884 11:17:54 -- nvmf/common.sh@717 -- # local ip 00:30:25.884 11:17:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:25.884 11:17:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:25.884 11:17:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.884 11:17:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.884 11:17:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:25.884 11:17:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.884 11:17:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:25.884 11:17:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:25.884 11:17:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:25.884 11:17:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:25.884 11:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.884 11:17:54 -- common/autotest_common.sh@10 -- # set +x 00:30:26.449 nvme0n1 00:30:26.449 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.449 11:17:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.449 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.449 11:17:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:26.449 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:26.449 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.449 11:17:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.449 11:17:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.449 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.449 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:26.708 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.708 11:17:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:26.708 11:17:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:26.708 11:17:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:26.708 11:17:55 -- host/auth.sh@44 -- # digest=sha384 00:30:26.708 11:17:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:26.708 11:17:55 -- host/auth.sh@44 -- # keyid=4 00:30:26.708 11:17:55 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:26.708 11:17:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:30:26.708 11:17:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:26.708 11:17:55 -- host/auth.sh@49 -- # echo 
DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:26.708 11:17:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:30:26.708 11:17:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:26.708 11:17:55 -- host/auth.sh@68 -- # digest=sha384 00:30:26.708 11:17:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:26.708 11:17:55 -- host/auth.sh@68 -- # keyid=4 00:30:26.708 11:17:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:26.708 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.708 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:26.708 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:26.708 11:17:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:26.708 11:17:55 -- nvmf/common.sh@717 -- # local ip 00:30:26.708 11:17:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:26.708 11:17:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:26.708 11:17:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.708 11:17:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.708 11:17:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:26.708 11:17:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.708 11:17:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:26.708 11:17:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:26.708 11:17:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:26.708 11:17:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:26.708 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:26.708 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 nvme0n1 00:30:27.274 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.274 11:17:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.274 11:17:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.274 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.274 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.274 11:17:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.274 11:17:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.274 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.274 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.274 11:17:55 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:27.274 11:17:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:27.274 11:17:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.274 11:17:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:27.274 11:17:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.274 11:17:55 -- host/auth.sh@44 -- # digest=sha512 00:30:27.274 11:17:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.274 11:17:55 -- host/auth.sh@44 -- # keyid=0 00:30:27.274 11:17:55 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:27.274 11:17:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.274 11:17:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.274 
11:17:55 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:27.274 11:17:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:30:27.274 11:17:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.274 11:17:55 -- host/auth.sh@68 -- # digest=sha512 00:30:27.274 11:17:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.274 11:17:55 -- host/auth.sh@68 -- # keyid=0 00:30:27.274 11:17:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.274 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.274 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.274 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.274 11:17:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.274 11:17:55 -- nvmf/common.sh@717 -- # local ip 00:30:27.274 11:17:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.274 11:17:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.274 11:17:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.274 11:17:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.274 11:17:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.274 11:17:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.274 11:17:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.274 11:17:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.274 11:17:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.274 11:17:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:27.274 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.274 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.590 nvme0n1 00:30:27.590 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.590 11:17:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.590 11:17:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.590 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.590 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.590 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.590 11:17:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.590 11:17:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.590 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.590 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.590 11:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.590 11:17:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.590 11:17:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:27.590 11:17:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.590 11:17:55 -- host/auth.sh@44 -- # digest=sha512 00:30:27.590 11:17:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.590 11:17:55 -- host/auth.sh@44 -- # keyid=1 00:30:27.590 11:17:55 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:27.590 11:17:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.590 11:17:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.590 11:17:55 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:27.590 11:17:55 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:30:27.590 11:17:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.590 11:17:55 -- host/auth.sh@68 -- # digest=sha512 00:30:27.590 11:17:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.590 11:17:55 -- host/auth.sh@68 -- # keyid=1 00:30:27.590 11:17:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.590 11:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.590 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:30:27.590 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.590 11:17:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.590 11:17:56 -- nvmf/common.sh@717 -- # local ip 00:30:27.590 11:17:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.590 11:17:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.590 11:17:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.590 11:17:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.590 11:17:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.590 11:17:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.590 11:17:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.590 11:17:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.590 11:17:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.590 11:17:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:27.590 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.590 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.590 nvme0n1 00:30:27.591 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.591 11:17:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.591 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.591 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.591 11:17:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.591 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.591 11:17:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.591 11:17:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.591 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.591 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.591 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.591 11:17:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.591 11:17:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:27.591 11:17:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.591 11:17:56 -- host/auth.sh@44 -- # digest=sha512 00:30:27.591 11:17:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.591 11:17:56 -- host/auth.sh@44 -- # keyid=2 00:30:27.591 11:17:56 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:27.591 11:17:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.591 11:17:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.591 11:17:56 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:27.591 11:17:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:30:27.591 11:17:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.591 11:17:56 -- 
host/auth.sh@68 -- # digest=sha512 00:30:27.591 11:17:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.591 11:17:56 -- host/auth.sh@68 -- # keyid=2 00:30:27.591 11:17:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.591 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.591 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.591 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.591 11:17:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.591 11:17:56 -- nvmf/common.sh@717 -- # local ip 00:30:27.591 11:17:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.591 11:17:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.591 11:17:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.591 11:17:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.591 11:17:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.591 11:17:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.591 11:17:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.591 11:17:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.591 11:17:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.591 11:17:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:27.591 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.591 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.850 nvme0n1 00:30:27.850 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.850 11:17:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.850 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.850 11:17:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.850 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.850 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.850 11:17:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.850 11:17:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.850 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.850 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.850 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.850 11:17:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:27.850 11:17:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:27.850 11:17:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:27.850 11:17:56 -- host/auth.sh@44 -- # digest=sha512 00:30:27.850 11:17:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:27.850 11:17:56 -- host/auth.sh@44 -- # keyid=3 00:30:27.850 11:17:56 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:27.850 11:17:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:27.850 11:17:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:27.850 11:17:56 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:27.850 11:17:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:30:27.850 11:17:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:27.850 11:17:56 -- host/auth.sh@68 -- # digest=sha512 00:30:27.850 11:17:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:27.850 11:17:56 
-- host/auth.sh@68 -- # keyid=3 00:30:27.850 11:17:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:27.850 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.850 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.850 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.850 11:17:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:27.850 11:17:56 -- nvmf/common.sh@717 -- # local ip 00:30:27.850 11:17:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:27.850 11:17:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:27.850 11:17:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.850 11:17:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.850 11:17:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:27.850 11:17:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.850 11:17:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:27.850 11:17:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:27.850 11:17:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:27.850 11:17:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:27.850 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.850 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.850 nvme0n1 00:30:27.850 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.850 11:17:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.850 11:17:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:27.850 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:27.850 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:27.850 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.109 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.109 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.109 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.109 11:17:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:28.109 11:17:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.109 11:17:56 -- host/auth.sh@44 -- # digest=sha512 00:30:28.109 11:17:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:28.109 11:17:56 -- host/auth.sh@44 -- # keyid=4 00:30:28.109 11:17:56 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:28.109 11:17:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.109 11:17:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:28.109 11:17:56 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:28.109 11:17:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:30:28.109 11:17:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.109 11:17:56 -- host/auth.sh@68 -- # digest=sha512 00:30:28.109 11:17:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:28.109 11:17:56 -- host/auth.sh@68 -- # keyid=4 00:30:28.109 11:17:56 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:28.109 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.109 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.109 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.109 11:17:56 -- nvmf/common.sh@717 -- # local ip 00:30:28.109 11:17:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.109 11:17:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.109 11:17:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.109 11:17:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.109 11:17:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.109 11:17:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.109 11:17:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.109 11:17:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.109 11:17:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.109 11:17:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:28.109 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.109 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.109 nvme0n1 00:30:28.109 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.109 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.109 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.109 11:17:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.109 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.109 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.109 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.109 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:28.109 11:17:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.109 11:17:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:28.109 11:17:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.109 11:17:56 -- host/auth.sh@44 -- # digest=sha512 00:30:28.109 11:17:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.109 11:17:56 -- host/auth.sh@44 -- # keyid=0 00:30:28.109 11:17:56 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:28.109 11:17:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.109 11:17:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.109 11:17:56 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:28.109 11:17:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:30:28.109 11:17:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.109 11:17:56 -- host/auth.sh@68 -- # digest=sha512 00:30:28.109 11:17:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.109 11:17:56 -- host/auth.sh@68 -- # keyid=0 00:30:28.109 11:17:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
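Zooming out, everything in this stretch of the log is one sweep of the three nested loops echoed at host/auth.sh@107-109; a reconstruction from the trace (the array contents noted in comments are just what this portion of the log exercises, not the full lists in auth.sh):

    # Reconstruction of the sweep, not the verbatim auth.sh source
    for digest in "${digests[@]}"; do          # this stretch covers sha384 and sha512
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do         # key slots 0-4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"       # provision the target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"     # set_options + attach + verify + detach
        done
      done
    done
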
00:30:28.109 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.109 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.109 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.109 11:17:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.109 11:17:56 -- nvmf/common.sh@717 -- # local ip 00:30:28.109 11:17:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.109 11:17:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.109 11:17:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.109 11:17:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.109 11:17:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.109 11:17:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.109 11:17:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.109 11:17:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.109 11:17:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.109 11:17:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:28.109 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.109 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.368 nvme0n1 00:30:28.368 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.368 11:17:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.368 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.368 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.368 11:17:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.368 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.368 11:17:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.368 11:17:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.368 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.368 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.368 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.368 11:17:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.368 11:17:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:28.368 11:17:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.368 11:17:56 -- host/auth.sh@44 -- # digest=sha512 00:30:28.368 11:17:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.368 11:17:56 -- host/auth.sh@44 -- # keyid=1 00:30:28.368 11:17:56 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:28.368 11:17:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.368 11:17:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.368 11:17:56 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:28.368 11:17:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:30:28.368 11:17:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.368 11:17:56 -- host/auth.sh@68 -- # digest=sha512 00:30:28.368 11:17:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.368 11:17:56 -- host/auth.sh@68 -- # keyid=1 00:30:28.368 11:17:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.368 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.368 11:17:56 -- 
common/autotest_common.sh@10 -- # set +x 00:30:28.368 11:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.368 11:17:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.368 11:17:56 -- nvmf/common.sh@717 -- # local ip 00:30:28.368 11:17:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.368 11:17:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.368 11:17:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.368 11:17:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.368 11:17:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.368 11:17:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.368 11:17:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.368 11:17:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.368 11:17:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.368 11:17:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:28.368 11:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.368 11:17:56 -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 nvme0n1 00:30:28.628 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.628 11:17:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.628 11:17:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.628 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.628 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.628 11:17:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.628 11:17:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.628 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.628 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.628 11:17:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.628 11:17:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:28.628 11:17:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.628 11:17:57 -- host/auth.sh@44 -- # digest=sha512 00:30:28.628 11:17:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.628 11:17:57 -- host/auth.sh@44 -- # keyid=2 00:30:28.628 11:17:57 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:28.628 11:17:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.628 11:17:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.628 11:17:57 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:28.628 11:17:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:30:28.628 11:17:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.628 11:17:57 -- host/auth.sh@68 -- # digest=sha512 00:30:28.628 11:17:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.628 11:17:57 -- host/auth.sh@68 -- # keyid=2 00:30:28.628 11:17:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.628 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.628 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.628 11:17:57 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:30:28.628 11:17:57 -- nvmf/common.sh@717 -- # local ip 00:30:28.628 11:17:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.628 11:17:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.628 11:17:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.628 11:17:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.628 11:17:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.628 11:17:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.628 11:17:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.628 11:17:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.628 11:17:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.628 11:17:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:28.628 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.628 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 nvme0n1 00:30:28.628 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.628 11:17:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.628 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.628 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.628 11:17:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.628 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.887 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.887 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.887 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.887 11:17:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:28.887 11:17:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.887 11:17:57 -- host/auth.sh@44 -- # digest=sha512 00:30:28.887 11:17:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.887 11:17:57 -- host/auth.sh@44 -- # keyid=3 00:30:28.887 11:17:57 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:28.887 11:17:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.887 11:17:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.887 11:17:57 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:28.887 11:17:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:30:28.887 11:17:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.887 11:17:57 -- host/auth.sh@68 -- # digest=sha512 00:30:28.887 11:17:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.887 11:17:57 -- host/auth.sh@68 -- # keyid=3 00:30:28.887 11:17:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.887 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.887 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.887 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.887 11:17:57 -- nvmf/common.sh@717 -- # local ip 00:30:28.887 11:17:57 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:30:28.887 11:17:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:28.887 11:17:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.887 11:17:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.887 11:17:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:28.887 11:17:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.887 11:17:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:28.887 11:17:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:28.887 11:17:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:28.887 11:17:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:28.887 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.887 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.887 nvme0n1 00:30:28.887 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.887 11:17:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:28.887 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.887 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.887 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.887 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.887 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.887 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:28.887 11:17:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:28.887 11:17:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:28.887 11:17:57 -- host/auth.sh@44 -- # digest=sha512 00:30:28.887 11:17:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:28.887 11:17:57 -- host/auth.sh@44 -- # keyid=4 00:30:28.887 11:17:57 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:28.887 11:17:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:28.887 11:17:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:28.887 11:17:57 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:28.887 11:17:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:30:28.887 11:17:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:28.887 11:17:57 -- host/auth.sh@68 -- # digest=sha512 00:30:28.887 11:17:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:28.887 11:17:57 -- host/auth.sh@68 -- # keyid=4 00:30:28.887 11:17:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:28.887 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:28.887 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.887 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:28.887 11:17:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:28.887 11:17:57 -- nvmf/common.sh@717 -- # local ip 00:30:28.887 11:17:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:28.887 11:17:57 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:30:28.887 11:17:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.887 11:17:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.887 11:17:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.146 11:17:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.146 11:17:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.146 11:17:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.146 11:17:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.146 11:17:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:29.146 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.146 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.146 nvme0n1 00:30:29.146 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.146 11:17:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.146 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.146 11:17:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.146 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.146 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.146 11:17:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.146 11:17:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.146 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.146 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.146 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.146 11:17:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:29.146 11:17:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.146 11:17:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:29.146 11:17:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:29.146 11:17:57 -- host/auth.sh@44 -- # digest=sha512 00:30:29.146 11:17:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:29.146 11:17:57 -- host/auth.sh@44 -- # keyid=0 00:30:29.146 11:17:57 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:29.146 11:17:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:29.146 11:17:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:29.146 11:17:57 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:29.146 11:17:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:30:29.146 11:17:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:29.146 11:17:57 -- host/auth.sh@68 -- # digest=sha512 00:30:29.146 11:17:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:29.146 11:17:57 -- host/auth.sh@68 -- # keyid=0 00:30:29.146 11:17:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:29.146 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.146 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.146 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.146 11:17:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:29.146 11:17:57 -- nvmf/common.sh@717 -- # local ip 00:30:29.146 11:17:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:29.146 11:17:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:29.146 11:17:57 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.146 11:17:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.146 11:17:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.146 11:17:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.146 11:17:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.146 11:17:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.146 11:17:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.146 11:17:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:29.146 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.146 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.405 nvme0n1 00:30:29.405 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.405 11:17:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.405 11:17:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.405 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.405 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.405 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.405 11:17:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.405 11:17:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.405 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.405 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.405 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.405 11:17:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.405 11:17:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:29.405 11:17:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:29.405 11:17:57 -- host/auth.sh@44 -- # digest=sha512 00:30:29.405 11:17:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:29.405 11:17:57 -- host/auth.sh@44 -- # keyid=1 00:30:29.405 11:17:57 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:29.405 11:17:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:29.405 11:17:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:29.405 11:17:57 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:29.405 11:17:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:30:29.405 11:17:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:29.405 11:17:57 -- host/auth.sh@68 -- # digest=sha512 00:30:29.405 11:17:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:29.405 11:17:57 -- host/auth.sh@68 -- # keyid=1 00:30:29.405 11:17:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:29.405 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.405 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.405 11:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.405 11:17:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:29.405 11:17:57 -- nvmf/common.sh@717 -- # local ip 00:30:29.405 11:17:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:29.405 11:17:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:29.405 11:17:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.405 11:17:57 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.405 11:17:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.405 11:17:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.405 11:17:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.405 11:17:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.405 11:17:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.405 11:17:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:29.405 11:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.405 11:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.663 nvme0n1 00:30:29.663 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.663 11:17:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.663 11:17:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.663 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.663 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:29.663 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.663 11:17:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.663 11:17:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.663 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.663 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:29.663 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.663 11:17:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.663 11:17:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:29.663 11:17:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:29.663 11:17:58 -- host/auth.sh@44 -- # digest=sha512 00:30:29.663 11:17:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:29.663 11:17:58 -- host/auth.sh@44 -- # keyid=2 00:30:29.663 11:17:58 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:29.663 11:17:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:29.663 11:17:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:29.663 11:17:58 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:29.663 11:17:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:30:29.663 11:17:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:29.663 11:17:58 -- host/auth.sh@68 -- # digest=sha512 00:30:29.663 11:17:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:29.663 11:17:58 -- host/auth.sh@68 -- # keyid=2 00:30:29.663 11:17:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:29.663 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.663 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:29.663 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.663 11:17:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:29.663 11:17:58 -- nvmf/common.sh@717 -- # local ip 00:30:29.663 11:17:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:29.663 11:17:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:29.663 11:17:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.663 11:17:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.663 11:17:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.663 11:17:58 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:30:29.663 11:17:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.663 11:17:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.663 11:17:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.663 11:17:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:29.663 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.663 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:29.922 nvme0n1 00:30:29.922 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.922 11:17:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.922 11:17:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:29.922 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.922 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:29.922 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.922 11:17:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.922 11:17:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.922 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.922 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:29.922 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.922 11:17:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:29.922 11:17:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:29.922 11:17:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:29.922 11:17:58 -- host/auth.sh@44 -- # digest=sha512 00:30:29.922 11:17:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:29.922 11:17:58 -- host/auth.sh@44 -- # keyid=3 00:30:29.922 11:17:58 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:29.922 11:17:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:29.922 11:17:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:29.922 11:17:58 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:29.922 11:17:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:30:29.922 11:17:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:29.922 11:17:58 -- host/auth.sh@68 -- # digest=sha512 00:30:29.922 11:17:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:29.922 11:17:58 -- host/auth.sh@68 -- # keyid=3 00:30:29.922 11:17:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:29.922 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.922 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:29.922 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:29.922 11:17:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:29.922 11:17:58 -- nvmf/common.sh@717 -- # local ip 00:30:29.922 11:17:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:29.922 11:17:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:29.922 11:17:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.922 11:17:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.922 11:17:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:29.922 11:17:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.922 11:17:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:29.922 11:17:58 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:29.922 11:17:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:29.922 11:17:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:29.922 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.922 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:30.181 nvme0n1 00:30:30.181 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.181 11:17:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:30.181 11:17:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.181 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.181 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:30.181 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.181 11:17:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.181 11:17:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.181 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.181 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:30.181 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.181 11:17:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:30.181 11:17:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:30.181 11:17:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:30.181 11:17:58 -- host/auth.sh@44 -- # digest=sha512 00:30:30.181 11:17:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:30.181 11:17:58 -- host/auth.sh@44 -- # keyid=4 00:30:30.181 11:17:58 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:30.181 11:17:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:30.181 11:17:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:30.181 11:17:58 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:30.181 11:17:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:30:30.181 11:17:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:30.181 11:17:58 -- host/auth.sh@68 -- # digest=sha512 00:30:30.181 11:17:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:30.181 11:17:58 -- host/auth.sh@68 -- # keyid=4 00:30:30.181 11:17:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:30.181 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.181 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:30.181 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.181 11:17:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:30.181 11:17:58 -- nvmf/common.sh@717 -- # local ip 00:30:30.181 11:17:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:30.181 11:17:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:30.181 11:17:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.181 11:17:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.181 11:17:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:30.181 11:17:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.181 11:17:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:30.181 11:17:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:30.181 11:17:58 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:30.181 11:17:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:30.181 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.181 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:30.479 nvme0n1 00:30:30.479 11:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.479 11:17:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.479 11:17:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:30.479 11:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.479 11:17:58 -- common/autotest_common.sh@10 -- # set +x 00:30:30.479 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.479 11:17:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.479 11:17:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.479 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.479 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:30.479 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.479 11:17:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:30.479 11:17:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:30.479 11:17:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:30.479 11:17:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:30.479 11:17:59 -- host/auth.sh@44 -- # digest=sha512 00:30:30.479 11:17:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:30.479 11:17:59 -- host/auth.sh@44 -- # keyid=0 00:30:30.479 11:17:59 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:30.479 11:17:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:30.479 11:17:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:30.479 11:17:59 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:30.479 11:17:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:30:30.479 11:17:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:30.479 11:17:59 -- host/auth.sh@68 -- # digest=sha512 00:30:30.479 11:17:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:30.479 11:17:59 -- host/auth.sh@68 -- # keyid=0 00:30:30.479 11:17:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:30.479 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.479 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:30.479 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.479 11:17:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:30.479 11:17:59 -- nvmf/common.sh@717 -- # local ip 00:30:30.479 11:17:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:30.479 11:17:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:30.479 11:17:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.479 11:17:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.479 11:17:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:30.479 11:17:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.479 11:17:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:30.479 11:17:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:30.479 11:17:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:30.479 11:17:59 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:30.479 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.479 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.045 nvme0n1 00:30:31.045 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.045 11:17:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.045 11:17:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:31.045 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.045 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.045 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.045 11:17:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.045 11:17:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.045 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.045 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.045 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.045 11:17:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:31.045 11:17:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:31.045 11:17:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:31.045 11:17:59 -- host/auth.sh@44 -- # digest=sha512 00:30:31.045 11:17:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:31.045 11:17:59 -- host/auth.sh@44 -- # keyid=1 00:30:31.045 11:17:59 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:31.045 11:17:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:31.045 11:17:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:31.045 11:17:59 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:31.045 11:17:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:30:31.045 11:17:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:31.045 11:17:59 -- host/auth.sh@68 -- # digest=sha512 00:30:31.045 11:17:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:31.045 11:17:59 -- host/auth.sh@68 -- # keyid=1 00:30:31.045 11:17:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:31.045 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.045 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.045 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.045 11:17:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:31.045 11:17:59 -- nvmf/common.sh@717 -- # local ip 00:30:31.045 11:17:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:31.045 11:17:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:31.045 11:17:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.045 11:17:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.045 11:17:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:31.045 11:17:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.045 11:17:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:31.045 11:17:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:31.045 11:17:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:31.045 11:17:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:31.045 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.045 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.303 nvme0n1 00:30:31.303 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.303 11:17:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.303 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.303 11:17:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:31.303 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.303 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.561 11:17:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.561 11:17:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.561 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.561 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.561 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.561 11:17:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:31.561 11:17:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:31.561 11:17:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:31.561 11:17:59 -- host/auth.sh@44 -- # digest=sha512 00:30:31.561 11:17:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:31.561 11:17:59 -- host/auth.sh@44 -- # keyid=2 00:30:31.561 11:17:59 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:31.561 11:17:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:31.561 11:17:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:31.561 11:17:59 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:31.561 11:17:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:30:31.561 11:17:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:31.561 11:17:59 -- host/auth.sh@68 -- # digest=sha512 00:30:31.561 11:17:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:31.561 11:17:59 -- host/auth.sh@68 -- # keyid=2 00:30:31.561 11:17:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:31.561 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.561 11:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:31.561 11:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.561 11:17:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:31.561 11:17:59 -- nvmf/common.sh@717 -- # local ip 00:30:31.561 11:17:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:31.561 11:17:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:31.561 11:17:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.561 11:17:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.561 11:17:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:31.561 11:17:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.561 11:17:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:31.561 11:17:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:31.561 11:17:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:31.561 11:17:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:31.561 11:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.561 11:17:59 -- 
common/autotest_common.sh@10 -- # set +x 00:30:31.820 nvme0n1 00:30:31.820 11:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.820 11:18:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.820 11:18:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:31.820 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.820 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:31.820 11:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.820 11:18:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.820 11:18:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.820 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.820 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:31.820 11:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.820 11:18:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:31.820 11:18:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:30:31.820 11:18:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:31.820 11:18:00 -- host/auth.sh@44 -- # digest=sha512 00:30:31.820 11:18:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:31.820 11:18:00 -- host/auth.sh@44 -- # keyid=3 00:30:31.820 11:18:00 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:31.820 11:18:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:31.820 11:18:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:31.820 11:18:00 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:31.820 11:18:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:30:31.820 11:18:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:31.820 11:18:00 -- host/auth.sh@68 -- # digest=sha512 00:30:31.820 11:18:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:31.820 11:18:00 -- host/auth.sh@68 -- # keyid=3 00:30:31.820 11:18:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:31.820 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.820 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:31.820 11:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.820 11:18:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:31.820 11:18:00 -- nvmf/common.sh@717 -- # local ip 00:30:31.820 11:18:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:31.820 11:18:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:31.820 11:18:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.820 11:18:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.820 11:18:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:31.820 11:18:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.820 11:18:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:31.820 11:18:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:31.820 11:18:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:31.820 11:18:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:31.820 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.820 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:32.386 nvme0n1 00:30:32.386 11:18:00 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:30:32.386 11:18:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.386 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.386 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:32.386 11:18:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:32.386 11:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.386 11:18:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.386 11:18:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.386 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.386 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:32.386 11:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.386 11:18:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:32.386 11:18:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:32.386 11:18:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:32.386 11:18:00 -- host/auth.sh@44 -- # digest=sha512 00:30:32.386 11:18:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:32.386 11:18:00 -- host/auth.sh@44 -- # keyid=4 00:30:32.386 11:18:00 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:32.386 11:18:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:32.386 11:18:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:32.386 11:18:00 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:32.386 11:18:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:30:32.386 11:18:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:32.386 11:18:00 -- host/auth.sh@68 -- # digest=sha512 00:30:32.386 11:18:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:32.386 11:18:00 -- host/auth.sh@68 -- # keyid=4 00:30:32.386 11:18:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:32.386 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.386 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:32.386 11:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.386 11:18:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:32.386 11:18:00 -- nvmf/common.sh@717 -- # local ip 00:30:32.386 11:18:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:32.386 11:18:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:32.386 11:18:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.386 11:18:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.386 11:18:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:32.386 11:18:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.386 11:18:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:32.386 11:18:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:32.386 11:18:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:32.386 11:18:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:32.386 11:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.386 11:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:32.644 nvme0n1 00:30:32.644 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.644 11:18:01 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:30:32.644 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.644 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:32.644 11:18:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:32.644 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.644 11:18:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.644 11:18:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.644 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.644 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:32.644 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.644 11:18:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:32.644 11:18:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:32.644 11:18:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:32.644 11:18:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:32.644 11:18:01 -- host/auth.sh@44 -- # digest=sha512 00:30:32.644 11:18:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:32.644 11:18:01 -- host/auth.sh@44 -- # keyid=0 00:30:32.644 11:18:01 -- host/auth.sh@45 -- # key=DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:32.644 11:18:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:32.644 11:18:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:32.644 11:18:01 -- host/auth.sh@49 -- # echo DHHC-1:00:OTVlODBmODIxMDU4ODZjMDVjMTNlNWNjOGUzZGU2ZDmjEzcn: 00:30:32.644 11:18:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:30:32.644 11:18:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:32.644 11:18:01 -- host/auth.sh@68 -- # digest=sha512 00:30:32.644 11:18:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:32.644 11:18:01 -- host/auth.sh@68 -- # keyid=0 00:30:32.644 11:18:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:32.644 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.644 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:32.644 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:32.644 11:18:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:32.644 11:18:01 -- nvmf/common.sh@717 -- # local ip 00:30:32.644 11:18:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:32.644 11:18:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:32.644 11:18:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.645 11:18:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.645 11:18:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:32.645 11:18:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.645 11:18:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:32.645 11:18:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:32.645 11:18:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:32.645 11:18:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:32.645 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:32.645 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:33.211 nvme0n1 00:30:33.211 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.211 11:18:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.211 11:18:01 -- host/auth.sh@73 -- # jq -r '.[].name' 
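The get_main_ns_ip block that repeats before every attach picks the address by transport: rdma would use NVMF_FIRST_TARGET_IP, tcp uses NVMF_INITIATOR_IP, which expands to 10.0.0.1 in this run. A sketch matching the nvmf/common.sh lines echoed in the trace (variable names are taken from the trace; TEST_TRANSPORT and the two IP variables are assumed to be exported by the harness):

# Resolve the IP the initiator dials, mirroring the get_main_ns_ip trace above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                     # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}                                                # indirect expansion, 10.0.0.1 here
    [[ -z $ip ]] && return 1
    echo "$ip"
}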
00:30:33.211 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.211 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:33.211 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.469 11:18:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.469 11:18:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.469 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.469 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:33.469 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.469 11:18:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:33.469 11:18:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:33.469 11:18:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:33.469 11:18:01 -- host/auth.sh@44 -- # digest=sha512 00:30:33.469 11:18:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:33.469 11:18:01 -- host/auth.sh@44 -- # keyid=1 00:30:33.469 11:18:01 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:33.469 11:18:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:33.469 11:18:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:33.469 11:18:01 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:33.469 11:18:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:30:33.469 11:18:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:33.469 11:18:01 -- host/auth.sh@68 -- # digest=sha512 00:30:33.469 11:18:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:33.469 11:18:01 -- host/auth.sh@68 -- # keyid=1 00:30:33.469 11:18:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:33.469 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.469 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:33.469 11:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:33.469 11:18:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:33.469 11:18:01 -- nvmf/common.sh@717 -- # local ip 00:30:33.469 11:18:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:33.469 11:18:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:33.469 11:18:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.469 11:18:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.469 11:18:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:33.469 11:18:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.469 11:18:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:33.469 11:18:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:33.469 11:18:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:33.469 11:18:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:33.469 11:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:33.469 11:18:01 -- common/autotest_common.sh@10 -- # set +x 00:30:34.035 nvme0n1 00:30:34.035 11:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.035 11:18:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.035 11:18:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:34.035 11:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.035 11:18:02 -- 
common/autotest_common.sh@10 -- # set +x 00:30:34.035 11:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.035 11:18:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.035 11:18:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.035 11:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.035 11:18:02 -- common/autotest_common.sh@10 -- # set +x 00:30:34.035 11:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.035 11:18:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:34.035 11:18:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:34.035 11:18:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:34.035 11:18:02 -- host/auth.sh@44 -- # digest=sha512 00:30:34.035 11:18:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:34.035 11:18:02 -- host/auth.sh@44 -- # keyid=2 00:30:34.035 11:18:02 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:34.035 11:18:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:34.035 11:18:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:34.035 11:18:02 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDI5OTE3ZTljMDRjZGU5YzdiZDI0YzBiN2UyYjNlMWLSu0W8: 00:30:34.035 11:18:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:30:34.035 11:18:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:34.035 11:18:02 -- host/auth.sh@68 -- # digest=sha512 00:30:34.035 11:18:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:34.035 11:18:02 -- host/auth.sh@68 -- # keyid=2 00:30:34.035 11:18:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:34.035 11:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.035 11:18:02 -- common/autotest_common.sh@10 -- # set +x 00:30:34.035 11:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.035 11:18:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:34.035 11:18:02 -- nvmf/common.sh@717 -- # local ip 00:30:34.035 11:18:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:34.035 11:18:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:34.035 11:18:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.035 11:18:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.035 11:18:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:34.035 11:18:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.035 11:18:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:34.035 11:18:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:34.035 11:18:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:34.035 11:18:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:34.035 11:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.035 11:18:02 -- common/autotest_common.sh@10 -- # set +x 00:30:34.602 nvme0n1 00:30:34.602 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.602 11:18:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:34.602 11:18:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.602 11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.602 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:34.602 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.602 11:18:03 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:34.602 11:18:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.602 11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.602 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:34.602 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.602 11:18:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:34.602 11:18:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:34.602 11:18:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:34.602 11:18:03 -- host/auth.sh@44 -- # digest=sha512 00:30:34.602 11:18:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:34.602 11:18:03 -- host/auth.sh@44 -- # keyid=3 00:30:34.602 11:18:03 -- host/auth.sh@45 -- # key=DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:34.602 11:18:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:34.602 11:18:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:34.602 11:18:03 -- host/auth.sh@49 -- # echo DHHC-1:02:MDQ4OTE4NTA4MTkzY2VjZTExNWI5MTNkYzM1YzM2MjgyYmMwYzEzYzljNTllYTBmzAvYsg==: 00:30:34.602 11:18:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:30:34.602 11:18:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:34.602 11:18:03 -- host/auth.sh@68 -- # digest=sha512 00:30:34.602 11:18:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:34.602 11:18:03 -- host/auth.sh@68 -- # keyid=3 00:30:34.602 11:18:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:34.602 11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.602 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:34.602 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:34.602 11:18:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:34.602 11:18:03 -- nvmf/common.sh@717 -- # local ip 00:30:34.602 11:18:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:34.602 11:18:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:34.602 11:18:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.602 11:18:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.602 11:18:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:34.602 11:18:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.602 11:18:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:34.602 11:18:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:34.602 11:18:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:34.602 11:18:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:34.602 11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:34.602 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:35.168 nvme0n1 00:30:35.168 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.168 11:18:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.168 11:18:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:35.168 11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.168 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:35.168 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.427 11:18:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.427 11:18:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:35.427 
11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.427 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:35.427 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.427 11:18:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:35.427 11:18:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:35.427 11:18:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:35.427 11:18:03 -- host/auth.sh@44 -- # digest=sha512 00:30:35.427 11:18:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:35.427 11:18:03 -- host/auth.sh@44 -- # keyid=4 00:30:35.427 11:18:03 -- host/auth.sh@45 -- # key=DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:35.427 11:18:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:30:35.427 11:18:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:35.427 11:18:03 -- host/auth.sh@49 -- # echo DHHC-1:03:OTQ2YWM0ZTBkYTY2NjZlYTIyZDNmMjI5NmMzMDI3OWVmZTc3YzhjYTU3NjMxNTdmZGQzYzA0MWM5M2ZiZTVmYvDmV+s=: 00:30:35.427 11:18:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:30:35.427 11:18:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:35.427 11:18:03 -- host/auth.sh@68 -- # digest=sha512 00:30:35.427 11:18:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:35.427 11:18:03 -- host/auth.sh@68 -- # keyid=4 00:30:35.427 11:18:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:35.427 11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.427 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:35.427 11:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.427 11:18:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:35.427 11:18:03 -- nvmf/common.sh@717 -- # local ip 00:30:35.427 11:18:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:35.427 11:18:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:35.427 11:18:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.427 11:18:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.427 11:18:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:35.427 11:18:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.427 11:18:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:35.427 11:18:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:35.427 11:18:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:35.427 11:18:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:35.427 11:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.427 11:18:03 -- common/autotest_common.sh@10 -- # set +x 00:30:35.994 nvme0n1 00:30:35.994 11:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.994 11:18:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.994 11:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.994 11:18:04 -- common/autotest_common.sh@10 -- # set +x 00:30:35.994 11:18:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:35.994 11:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.994 11:18:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:35.994 11:18:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:35.994 11:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.994 
11:18:04 -- common/autotest_common.sh@10 -- # set +x 00:30:35.994 11:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.994 11:18:04 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:35.994 11:18:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:35.994 11:18:04 -- host/auth.sh@44 -- # digest=sha256 00:30:35.994 11:18:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:35.994 11:18:04 -- host/auth.sh@44 -- # keyid=1 00:30:35.994 11:18:04 -- host/auth.sh@45 -- # key=DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:35.994 11:18:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:35.994 11:18:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:35.994 11:18:04 -- host/auth.sh@49 -- # echo DHHC-1:00:OWZjZWQ4NmMyZGZmZDMxOTUxNDVkMGExNDM0OTNlOTQ5ZjA1NjViZmJkY2MyNGJhhsexFQ==: 00:30:35.994 11:18:04 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:35.994 11:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.994 11:18:04 -- common/autotest_common.sh@10 -- # set +x 00:30:35.994 11:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.994 11:18:04 -- host/auth.sh@119 -- # get_main_ns_ip 00:30:35.994 11:18:04 -- nvmf/common.sh@717 -- # local ip 00:30:35.994 11:18:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:35.994 11:18:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:35.994 11:18:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.994 11:18:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.994 11:18:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:35.994 11:18:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.994 11:18:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:35.994 11:18:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:35.994 11:18:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:35.994 11:18:04 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:35.994 11:18:04 -- common/autotest_common.sh@638 -- # local es=0 00:30:35.994 11:18:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:35.994 11:18:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:30:35.994 11:18:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:35.994 11:18:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:30:35.994 11:18:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:35.994 11:18:04 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:35.994 11:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.994 11:18:04 -- common/autotest_common.sh@10 -- # set +x 00:30:35.994 2024/04/18 11:18:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:30:35.994 request: 00:30:35.994 { 00:30:35.994 "method": 
"bdev_nvme_attach_controller", 00:30:35.994 "params": { 00:30:35.994 "name": "nvme0", 00:30:35.994 "trtype": "tcp", 00:30:35.994 "traddr": "10.0.0.1", 00:30:35.994 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:35.994 "adrfam": "ipv4", 00:30:35.994 "trsvcid": "4420", 00:30:35.994 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:30:35.994 } 00:30:35.994 } 00:30:35.994 Got JSON-RPC error response 00:30:35.994 GoRPCClient: error on JSON-RPC call 00:30:35.994 11:18:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:35.994 11:18:04 -- common/autotest_common.sh@641 -- # es=1 00:30:35.994 11:18:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:35.994 11:18:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:35.994 11:18:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:35.994 11:18:04 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.994 11:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.994 11:18:04 -- common/autotest_common.sh@10 -- # set +x 00:30:35.995 11:18:04 -- host/auth.sh@121 -- # jq length 00:30:35.995 11:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.995 11:18:04 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:30:35.995 11:18:04 -- host/auth.sh@124 -- # get_main_ns_ip 00:30:35.995 11:18:04 -- nvmf/common.sh@717 -- # local ip 00:30:35.995 11:18:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:35.995 11:18:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:35.995 11:18:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:35.995 11:18:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:35.995 11:18:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:35.995 11:18:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:35.995 11:18:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:35.995 11:18:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:35.995 11:18:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:35.995 11:18:04 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:35.995 11:18:04 -- common/autotest_common.sh@638 -- # local es=0 00:30:35.995 11:18:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:35.995 11:18:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:30:35.995 11:18:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:35.995 11:18:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:30:35.995 11:18:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:35.995 11:18:04 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:35.995 11:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.995 11:18:04 -- common/autotest_common.sh@10 -- # set +x 00:30:35.995 2024/04/18 11:18:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:30:35.995 
request: 00:30:35.995 { 00:30:35.995 "method": "bdev_nvme_attach_controller", 00:30:35.995 "params": { 00:30:36.253 "name": "nvme0", 00:30:36.253 "trtype": "tcp", 00:30:36.253 "traddr": "10.0.0.1", 00:30:36.253 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:36.253 "adrfam": "ipv4", 00:30:36.253 "trsvcid": "4420", 00:30:36.253 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:36.253 "dhchap_key": "key2" 00:30:36.253 } 00:30:36.253 } 00:30:36.253 Got JSON-RPC error response 00:30:36.253 GoRPCClient: error on JSON-RPC call 00:30:36.253 11:18:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:36.253 11:18:04 -- common/autotest_common.sh@641 -- # es=1 00:30:36.253 11:18:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:36.253 11:18:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:36.253 11:18:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:36.253 11:18:04 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.253 11:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:36.253 11:18:04 -- common/autotest_common.sh@10 -- # set +x 00:30:36.253 11:18:04 -- host/auth.sh@127 -- # jq length 00:30:36.253 11:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:36.253 11:18:04 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:30:36.253 11:18:04 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:36.253 11:18:04 -- host/auth.sh@130 -- # cleanup 00:30:36.253 11:18:04 -- host/auth.sh@24 -- # nvmftestfini 00:30:36.253 11:18:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:36.253 11:18:04 -- nvmf/common.sh@117 -- # sync 00:30:36.253 11:18:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:36.253 11:18:04 -- nvmf/common.sh@120 -- # set +e 00:30:36.253 11:18:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:36.253 11:18:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:36.253 rmmod nvme_tcp 00:30:36.253 rmmod nvme_fabrics 00:30:36.253 11:18:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:36.253 11:18:04 -- nvmf/common.sh@124 -- # set -e 00:30:36.253 11:18:04 -- nvmf/common.sh@125 -- # return 0 00:30:36.253 11:18:04 -- nvmf/common.sh@478 -- # '[' -n 102867 ']' 00:30:36.253 11:18:04 -- nvmf/common.sh@479 -- # killprocess 102867 00:30:36.253 11:18:04 -- common/autotest_common.sh@936 -- # '[' -z 102867 ']' 00:30:36.253 11:18:04 -- common/autotest_common.sh@940 -- # kill -0 102867 00:30:36.253 11:18:04 -- common/autotest_common.sh@941 -- # uname 00:30:36.253 11:18:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:36.253 11:18:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102867 00:30:36.253 11:18:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:36.253 11:18:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:36.253 11:18:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102867' 00:30:36.253 killing process with pid 102867 00:30:36.253 11:18:04 -- common/autotest_common.sh@955 -- # kill 102867 00:30:36.253 11:18:04 -- common/autotest_common.sh@960 -- # wait 102867 00:30:36.511 11:18:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:36.511 11:18:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:36.511 11:18:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:36.511 11:18:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:36.511 11:18:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:36.511 11:18:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:30:36.511 11:18:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:36.511 11:18:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.511 11:18:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:36.511 11:18:04 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:36.511 11:18:04 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:36.511 11:18:04 -- host/auth.sh@27 -- # clean_kernel_target 00:30:36.511 11:18:04 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:36.511 11:18:04 -- nvmf/common.sh@675 -- # echo 0 00:30:36.511 11:18:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:36.511 11:18:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:36.511 11:18:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:36.512 11:18:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:36.512 11:18:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:30:36.512 11:18:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:30:36.512 11:18:05 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:37.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:37.351 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:37.351 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:37.351 11:18:05 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.p5A /tmp/spdk.key-null.OxF /tmp/spdk.key-sha256.tv7 /tmp/spdk.key-sha384.q6e /tmp/spdk.key-sha512.gfr /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:30:37.351 11:18:05 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:37.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:37.865 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:37.865 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:37.865 00:30:37.865 real 0m38.797s 00:30:37.865 user 0m35.010s 00:30:37.865 sys 0m3.623s 00:30:37.865 11:18:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:37.865 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:30:37.865 ************************************ 00:30:37.865 END TEST nvmf_auth 00:30:37.865 ************************************ 00:30:37.865 11:18:06 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:30:37.865 11:18:06 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:37.865 11:18:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:37.865 11:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:37.865 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:30:37.865 ************************************ 00:30:37.865 START TEST nvmf_digest 00:30:37.865 ************************************ 00:30:37.865 11:18:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:37.865 * Looking for test storage... 
00:30:37.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:37.865 11:18:06 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:37.865 11:18:06 -- nvmf/common.sh@7 -- # uname -s 00:30:37.865 11:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.865 11:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.122 11:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.122 11:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.122 11:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.122 11:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.122 11:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.122 11:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.122 11:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.122 11:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.122 11:18:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:30:38.122 11:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:30:38.122 11:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.122 11:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.122 11:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:38.122 11:18:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.122 11:18:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:38.122 11:18:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.122 11:18:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.122 11:18:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.122 11:18:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.122 11:18:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.122 11:18:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.122 11:18:06 -- paths/export.sh@5 -- # export PATH 00:30:38.122 11:18:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.122 11:18:06 -- nvmf/common.sh@47 -- # : 0 00:30:38.122 11:18:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.122 11:18:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.122 11:18:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.122 11:18:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.122 11:18:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.122 11:18:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.122 11:18:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.122 11:18:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.122 11:18:06 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:38.122 11:18:06 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:38.122 11:18:06 -- host/digest.sh@16 -- # runtime=2 00:30:38.122 11:18:06 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:38.122 11:18:06 -- host/digest.sh@138 -- # nvmftestinit 00:30:38.122 11:18:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:38.122 11:18:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.122 11:18:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:38.122 11:18:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:38.122 11:18:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:38.122 11:18:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.122 11:18:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.122 11:18:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.122 11:18:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:38.122 11:18:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:38.122 11:18:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:38.122 11:18:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:38.122 11:18:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:38.122 11:18:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:38.122 11:18:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.122 11:18:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.122 11:18:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:38.122 11:18:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:38.122 11:18:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:30:38.122 11:18:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:38.122 11:18:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:38.122 11:18:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.122 11:18:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:38.122 11:18:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:38.122 11:18:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:38.122 11:18:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:38.122 11:18:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:38.122 11:18:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:38.122 Cannot find device "nvmf_tgt_br" 00:30:38.122 11:18:06 -- nvmf/common.sh@155 -- # true 00:30:38.122 11:18:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:38.122 Cannot find device "nvmf_tgt_br2" 00:30:38.122 11:18:06 -- nvmf/common.sh@156 -- # true 00:30:38.122 11:18:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:38.122 11:18:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:38.122 Cannot find device "nvmf_tgt_br" 00:30:38.123 11:18:06 -- nvmf/common.sh@158 -- # true 00:30:38.123 11:18:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:38.123 Cannot find device "nvmf_tgt_br2" 00:30:38.123 11:18:06 -- nvmf/common.sh@159 -- # true 00:30:38.123 11:18:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:38.123 11:18:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:38.123 11:18:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:38.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:38.123 11:18:06 -- nvmf/common.sh@162 -- # true 00:30:38.123 11:18:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:38.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:38.123 11:18:06 -- nvmf/common.sh@163 -- # true 00:30:38.123 11:18:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:38.123 11:18:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:38.123 11:18:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:38.123 11:18:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:38.123 11:18:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:38.123 11:18:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:38.123 11:18:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:38.123 11:18:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:38.123 11:18:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:38.123 11:18:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:38.123 11:18:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:38.123 11:18:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:38.123 11:18:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:38.380 11:18:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:38.380 11:18:06 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:38.380 11:18:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:38.380 11:18:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:38.380 11:18:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:38.380 11:18:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:38.380 11:18:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:38.380 11:18:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:38.380 11:18:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:38.380 11:18:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:38.380 11:18:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:38.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:30:38.380 00:30:38.380 --- 10.0.0.2 ping statistics --- 00:30:38.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.380 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:30:38.380 11:18:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:38.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:38.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:30:38.380 00:30:38.380 --- 10.0.0.3 ping statistics --- 00:30:38.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.380 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:30:38.380 11:18:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:38.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:38.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:30:38.380 00:30:38.380 --- 10.0.0.1 ping statistics --- 00:30:38.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.380 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:30:38.380 11:18:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.380 11:18:06 -- nvmf/common.sh@422 -- # return 0 00:30:38.380 11:18:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:38.380 11:18:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.380 11:18:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:38.380 11:18:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:38.380 11:18:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.380 11:18:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:38.380 11:18:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:38.380 11:18:06 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:38.380 11:18:06 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:38.380 11:18:06 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:38.380 11:18:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:38.380 11:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.380 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:30:38.380 ************************************ 00:30:38.380 START TEST nvmf_digest_clean 00:30:38.380 ************************************ 00:30:38.380 11:18:06 -- common/autotest_common.sh@1111 -- # run_digest 00:30:38.380 11:18:06 -- host/digest.sh@120 -- # local dsa_initiator 00:30:38.380 11:18:06 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:38.380 11:18:06 -- host/digest.sh@121 -- # dsa_initiator=false 
00:30:38.380 11:18:06 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:38.380 11:18:06 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:38.380 11:18:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:38.380 11:18:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:38.380 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:30:38.380 11:18:06 -- nvmf/common.sh@470 -- # nvmfpid=104485 00:30:38.380 11:18:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:38.380 11:18:06 -- nvmf/common.sh@471 -- # waitforlisten 104485 00:30:38.380 11:18:06 -- common/autotest_common.sh@817 -- # '[' -z 104485 ']' 00:30:38.380 11:18:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.380 11:18:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:38.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.380 11:18:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.380 11:18:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:38.380 11:18:06 -- common/autotest_common.sh@10 -- # set +x 00:30:38.380 [2024-04-18 11:18:07.012796] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:38.380 [2024-04-18 11:18:07.012859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.638 [2024-04-18 11:18:07.150219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.638 [2024-04-18 11:18:07.238912] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.638 [2024-04-18 11:18:07.238985] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.638 [2024-04-18 11:18:07.239007] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.638 [2024-04-18 11:18:07.239022] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.638 [2024-04-18 11:18:07.239079] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:38.638 [2024-04-18 11:18:07.239138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.572 11:18:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:39.572 11:18:08 -- common/autotest_common.sh@850 -- # return 0 00:30:39.572 11:18:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:39.572 11:18:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:39.572 11:18:08 -- common/autotest_common.sh@10 -- # set +x 00:30:39.572 11:18:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.572 11:18:08 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:39.572 11:18:08 -- host/digest.sh@126 -- # common_target_config 00:30:39.572 11:18:08 -- host/digest.sh@43 -- # rpc_cmd 00:30:39.572 11:18:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:39.572 11:18:08 -- common/autotest_common.sh@10 -- # set +x 00:30:39.572 null0 00:30:39.572 [2024-04-18 11:18:08.202141] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.830 [2024-04-18 11:18:08.226323] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.830 11:18:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:39.830 11:18:08 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:39.830 11:18:08 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:39.830 11:18:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:39.830 11:18:08 -- host/digest.sh@80 -- # rw=randread 00:30:39.830 11:18:08 -- host/digest.sh@80 -- # bs=4096 00:30:39.830 11:18:08 -- host/digest.sh@80 -- # qd=128 00:30:39.830 11:18:08 -- host/digest.sh@80 -- # scan_dsa=false 00:30:39.830 11:18:08 -- host/digest.sh@83 -- # bperfpid=104535 00:30:39.830 11:18:08 -- host/digest.sh@84 -- # waitforlisten 104535 /var/tmp/bperf.sock 00:30:39.830 11:18:08 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:39.830 11:18:08 -- common/autotest_common.sh@817 -- # '[' -z 104535 ']' 00:30:39.830 11:18:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:39.830 11:18:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:39.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:39.830 11:18:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:39.830 11:18:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:39.830 11:18:08 -- common/autotest_common.sh@10 -- # set +x 00:30:39.830 [2024-04-18 11:18:08.290376] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:30:39.830 [2024-04-18 11:18:08.290470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104535 ] 00:30:39.830 [2024-04-18 11:18:08.436036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.088 [2024-04-18 11:18:08.539995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.022 11:18:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:41.022 11:18:09 -- common/autotest_common.sh@850 -- # return 0 00:30:41.022 11:18:09 -- host/digest.sh@86 -- # false 00:30:41.022 11:18:09 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:41.022 11:18:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:41.279 11:18:09 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.279 11:18:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.536 nvme0n1 00:30:41.536 11:18:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:41.536 11:18:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:41.536 Running I/O for 2 seconds... 00:30:44.062 00:30:44.062 Latency(us) 00:30:44.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.062 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:44.062 nvme0n1 : 2.00 18643.03 72.82 0.00 0.00 6857.20 3738.53 12928.47 00:30:44.062 =================================================================================================================== 00:30:44.062 Total : 18643.03 72.82 0.00 0.00 6857.20 3738.53 12928.47 00:30:44.062 0 00:30:44.062 11:18:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:44.062 11:18:12 -- host/digest.sh@93 -- # get_accel_stats 00:30:44.062 11:18:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:44.062 | select(.opcode=="crc32c") 00:30:44.062 | "\(.module_name) \(.executed)"' 00:30:44.062 11:18:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:44.062 11:18:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:44.062 11:18:12 -- host/digest.sh@94 -- # false 00:30:44.062 11:18:12 -- host/digest.sh@94 -- # exp_module=software 00:30:44.062 11:18:12 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:44.062 11:18:12 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:44.062 11:18:12 -- host/digest.sh@98 -- # killprocess 104535 00:30:44.062 11:18:12 -- common/autotest_common.sh@936 -- # '[' -z 104535 ']' 00:30:44.062 11:18:12 -- common/autotest_common.sh@940 -- # kill -0 104535 00:30:44.062 11:18:12 -- common/autotest_common.sh@941 -- # uname 00:30:44.062 11:18:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:44.062 11:18:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104535 00:30:44.062 11:18:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:44.063 11:18:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:44.063 killing process with pid 104535 
00:30:44.063 11:18:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104535' 00:30:44.063 11:18:12 -- common/autotest_common.sh@955 -- # kill 104535 00:30:44.063 Received shutdown signal, test time was about 2.000000 seconds 00:30:44.063 00:30:44.063 Latency(us) 00:30:44.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.063 =================================================================================================================== 00:30:44.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.063 11:18:12 -- common/autotest_common.sh@960 -- # wait 104535 00:30:44.063 11:18:12 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:44.063 11:18:12 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:44.063 11:18:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:44.063 11:18:12 -- host/digest.sh@80 -- # rw=randread 00:30:44.063 11:18:12 -- host/digest.sh@80 -- # bs=131072 00:30:44.063 11:18:12 -- host/digest.sh@80 -- # qd=16 00:30:44.063 11:18:12 -- host/digest.sh@80 -- # scan_dsa=false 00:30:44.063 11:18:12 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:44.063 11:18:12 -- host/digest.sh@83 -- # bperfpid=104624 00:30:44.063 11:18:12 -- host/digest.sh@84 -- # waitforlisten 104624 /var/tmp/bperf.sock 00:30:44.063 11:18:12 -- common/autotest_common.sh@817 -- # '[' -z 104624 ']' 00:30:44.063 11:18:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:44.063 11:18:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:44.063 11:18:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:44.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:44.063 11:18:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:44.063 11:18:12 -- common/autotest_common.sh@10 -- # set +x 00:30:44.063 [2024-04-18 11:18:12.674662] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:44.063 [2024-04-18 11:18:12.675469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104624 ] 00:30:44.063 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:44.063 Zero copy mechanism will not be used. 
00:30:44.320 [2024-04-18 11:18:12.809063] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.320 [2024-04-18 11:18:12.894364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.252 11:18:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:45.253 11:18:13 -- common/autotest_common.sh@850 -- # return 0 00:30:45.253 11:18:13 -- host/digest.sh@86 -- # false 00:30:45.253 11:18:13 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:45.253 11:18:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:45.510 11:18:14 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:45.510 11:18:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:45.768 nvme0n1 00:30:45.768 11:18:14 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:45.768 11:18:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:46.026 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:46.026 Zero copy mechanism will not be used. 00:30:46.026 Running I/O for 2 seconds... 00:30:47.925 00:30:47.925 Latency(us) 00:30:47.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.925 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:47.925 nvme0n1 : 2.00 8154.35 1019.29 0.00 0.00 1958.56 573.44 11617.75 00:30:47.925 =================================================================================================================== 00:30:47.925 Total : 8154.35 1019.29 0.00 0.00 1958.56 573.44 11617.75 00:30:47.925 0 00:30:47.925 11:18:16 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:47.925 11:18:16 -- host/digest.sh@93 -- # get_accel_stats 00:30:47.925 11:18:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:47.925 11:18:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:47.925 11:18:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:47.925 | select(.opcode=="crc32c") 00:30:47.925 | "\(.module_name) \(.executed)"' 00:30:48.183 11:18:16 -- host/digest.sh@94 -- # false 00:30:48.183 11:18:16 -- host/digest.sh@94 -- # exp_module=software 00:30:48.183 11:18:16 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:48.183 11:18:16 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:48.183 11:18:16 -- host/digest.sh@98 -- # killprocess 104624 00:30:48.183 11:18:16 -- common/autotest_common.sh@936 -- # '[' -z 104624 ']' 00:30:48.183 11:18:16 -- common/autotest_common.sh@940 -- # kill -0 104624 00:30:48.183 11:18:16 -- common/autotest_common.sh@941 -- # uname 00:30:48.183 11:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:48.183 11:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104624 00:30:48.183 11:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:48.183 killing process with pid 104624 00:30:48.183 11:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:48.183 11:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104624' 00:30:48.183 Received shutdown signal, test time was about 2.000000 
seconds 00:30:48.183 00:30:48.183 Latency(us) 00:30:48.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.183 =================================================================================================================== 00:30:48.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.183 11:18:16 -- common/autotest_common.sh@955 -- # kill 104624 00:30:48.183 11:18:16 -- common/autotest_common.sh@960 -- # wait 104624 00:30:48.442 11:18:16 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:48.442 11:18:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:48.442 11:18:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:48.442 11:18:16 -- host/digest.sh@80 -- # rw=randwrite 00:30:48.442 11:18:16 -- host/digest.sh@80 -- # bs=4096 00:30:48.442 11:18:16 -- host/digest.sh@80 -- # qd=128 00:30:48.442 11:18:16 -- host/digest.sh@80 -- # scan_dsa=false 00:30:48.442 11:18:16 -- host/digest.sh@83 -- # bperfpid=104716 00:30:48.442 11:18:16 -- host/digest.sh@84 -- # waitforlisten 104716 /var/tmp/bperf.sock 00:30:48.442 11:18:16 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:48.442 11:18:16 -- common/autotest_common.sh@817 -- # '[' -z 104716 ']' 00:30:48.442 11:18:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:48.442 11:18:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:48.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:48.442 11:18:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:48.442 11:18:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:48.442 11:18:16 -- common/autotest_common.sh@10 -- # set +x 00:30:48.442 [2024-04-18 11:18:17.032936] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:30:48.442 [2024-04-18 11:18:17.033047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104716 ] 00:30:48.701 [2024-04-18 11:18:17.172510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.701 [2024-04-18 11:18:17.246777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.636 11:18:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:49.636 11:18:17 -- common/autotest_common.sh@850 -- # return 0 00:30:49.636 11:18:17 -- host/digest.sh@86 -- # false 00:30:49.636 11:18:17 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:49.636 11:18:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:49.893 11:18:18 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.893 11:18:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:50.151 nvme0n1 00:30:50.151 11:18:18 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:50.151 11:18:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:50.151 Running I/O for 2 seconds... 00:30:52.676 00:30:52.676 Latency(us) 00:30:52.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.676 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.676 nvme0n1 : 2.00 22086.65 86.28 0.00 0.00 5789.33 2874.65 9294.20 00:30:52.676 =================================================================================================================== 00:30:52.676 Total : 22086.65 86.28 0.00 0.00 5789.33 2874.65 9294.20 00:30:52.676 0 00:30:52.676 11:18:20 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:52.676 11:18:20 -- host/digest.sh@93 -- # get_accel_stats 00:30:52.676 11:18:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:52.676 11:18:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:52.676 | select(.opcode=="crc32c") 00:30:52.676 | "\(.module_name) \(.executed)"' 00:30:52.676 11:18:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:52.676 11:18:21 -- host/digest.sh@94 -- # false 00:30:52.676 11:18:21 -- host/digest.sh@94 -- # exp_module=software 00:30:52.676 11:18:21 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:52.676 11:18:21 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:52.676 11:18:21 -- host/digest.sh@98 -- # killprocess 104716 00:30:52.676 11:18:21 -- common/autotest_common.sh@936 -- # '[' -z 104716 ']' 00:30:52.676 11:18:21 -- common/autotest_common.sh@940 -- # kill -0 104716 00:30:52.676 11:18:21 -- common/autotest_common.sh@941 -- # uname 00:30:52.676 11:18:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:52.676 11:18:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104716 00:30:52.676 11:18:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:52.676 killing process with pid 104716 00:30:52.676 11:18:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:30:52.676 11:18:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104716' 00:30:52.676 Received shutdown signal, test time was about 2.000000 seconds 00:30:52.676 00:30:52.676 Latency(us) 00:30:52.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.676 =================================================================================================================== 00:30:52.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.676 11:18:21 -- common/autotest_common.sh@955 -- # kill 104716 00:30:52.676 11:18:21 -- common/autotest_common.sh@960 -- # wait 104716 00:30:52.676 11:18:21 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:52.676 11:18:21 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:52.676 11:18:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:52.676 11:18:21 -- host/digest.sh@80 -- # rw=randwrite 00:30:52.676 11:18:21 -- host/digest.sh@80 -- # bs=131072 00:30:52.676 11:18:21 -- host/digest.sh@80 -- # qd=16 00:30:52.676 11:18:21 -- host/digest.sh@80 -- # scan_dsa=false 00:30:52.676 11:18:21 -- host/digest.sh@83 -- # bperfpid=104801 00:30:52.676 11:18:21 -- host/digest.sh@84 -- # waitforlisten 104801 /var/tmp/bperf.sock 00:30:52.676 11:18:21 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:52.676 11:18:21 -- common/autotest_common.sh@817 -- # '[' -z 104801 ']' 00:30:52.676 11:18:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.676 11:18:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:52.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:52.676 11:18:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.676 11:18:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:52.676 11:18:21 -- common/autotest_common.sh@10 -- # set +x 00:30:52.934 [2024-04-18 11:18:21.357844] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:52.934 [2024-04-18 11:18:21.357938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104801 ] 00:30:52.934 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:52.934 Zero copy mechanism will not be used. 
00:30:52.934 [2024-04-18 11:18:21.499770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.192 [2024-04-18 11:18:21.590907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.756 11:18:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:53.756 11:18:22 -- common/autotest_common.sh@850 -- # return 0 00:30:53.756 11:18:22 -- host/digest.sh@86 -- # false 00:30:53.756 11:18:22 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:53.756 11:18:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:54.014 11:18:22 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:54.014 11:18:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:54.578 nvme0n1 00:30:54.578 11:18:22 -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:54.578 11:18:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:54.578 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:54.578 Zero copy mechanism will not be used. 00:30:54.578 Running I/O for 2 seconds... 00:30:56.475 00:30:56.475 Latency(us) 00:30:56.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.475 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:56.475 nvme0n1 : 2.00 6227.39 778.42 0.00 0.00 2563.66 2219.29 8340.95 00:30:56.475 =================================================================================================================== 00:30:56.475 Total : 6227.39 778.42 0.00 0.00 2563.66 2219.29 8340.95 00:30:56.475 0 00:30:56.475 11:18:25 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:56.475 11:18:25 -- host/digest.sh@93 -- # get_accel_stats 00:30:56.475 11:18:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:56.475 11:18:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:56.475 | select(.opcode=="crc32c") 00:30:56.475 | "\(.module_name) \(.executed)"' 00:30:56.475 11:18:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:56.733 11:18:25 -- host/digest.sh@94 -- # false 00:30:56.733 11:18:25 -- host/digest.sh@94 -- # exp_module=software 00:30:56.733 11:18:25 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:56.733 11:18:25 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:56.733 11:18:25 -- host/digest.sh@98 -- # killprocess 104801 00:30:56.733 11:18:25 -- common/autotest_common.sh@936 -- # '[' -z 104801 ']' 00:30:56.733 11:18:25 -- common/autotest_common.sh@940 -- # kill -0 104801 00:30:56.733 11:18:25 -- common/autotest_common.sh@941 -- # uname 00:30:56.733 11:18:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:56.733 11:18:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104801 00:30:56.733 killing process with pid 104801 00:30:56.733 Received shutdown signal, test time was about 2.000000 seconds 00:30:56.733 00:30:56.733 Latency(us) 00:30:56.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.733 =================================================================================================================== 
00:30:56.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:56.733 11:18:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:56.733 11:18:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:56.733 11:18:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104801' 00:30:56.733 11:18:25 -- common/autotest_common.sh@955 -- # kill 104801 00:30:56.733 11:18:25 -- common/autotest_common.sh@960 -- # wait 104801 00:30:56.991 11:18:25 -- host/digest.sh@132 -- # killprocess 104485 00:30:56.991 11:18:25 -- common/autotest_common.sh@936 -- # '[' -z 104485 ']' 00:30:56.991 11:18:25 -- common/autotest_common.sh@940 -- # kill -0 104485 00:30:56.991 11:18:25 -- common/autotest_common.sh@941 -- # uname 00:30:56.991 11:18:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:56.991 11:18:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104485 00:30:56.991 killing process with pid 104485 00:30:56.991 11:18:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:56.991 11:18:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:56.991 11:18:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104485' 00:30:56.991 11:18:25 -- common/autotest_common.sh@955 -- # kill 104485 00:30:56.991 11:18:25 -- common/autotest_common.sh@960 -- # wait 104485 00:30:57.260 ************************************ 00:30:57.260 END TEST nvmf_digest_clean 00:30:57.260 ************************************ 00:30:57.260 00:30:57.260 real 0m18.853s 00:30:57.260 user 0m36.240s 00:30:57.260 sys 0m4.555s 00:30:57.260 11:18:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:57.260 11:18:25 -- common/autotest_common.sh@10 -- # set +x 00:30:57.260 11:18:25 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:57.260 11:18:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:57.260 11:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:57.260 11:18:25 -- common/autotest_common.sh@10 -- # set +x 00:30:57.518 ************************************ 00:30:57.518 START TEST nvmf_digest_error 00:30:57.518 ************************************ 00:30:57.518 11:18:25 -- common/autotest_common.sh@1111 -- # run_digest_error 00:30:57.518 11:18:25 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:57.518 11:18:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:57.518 11:18:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:57.518 11:18:25 -- common/autotest_common.sh@10 -- # set +x 00:30:57.518 11:18:25 -- nvmf/common.sh@470 -- # nvmfpid=104918 00:30:57.518 11:18:25 -- nvmf/common.sh@471 -- # waitforlisten 104918 00:30:57.518 11:18:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:57.518 11:18:25 -- common/autotest_common.sh@817 -- # '[' -z 104918 ']' 00:30:57.518 11:18:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.518 11:18:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:57.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.518 11:18:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:57.518 11:18:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:57.518 11:18:25 -- common/autotest_common.sh@10 -- # set +x 00:30:57.518 [2024-04-18 11:18:25.995210] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:57.518 [2024-04-18 11:18:25.995290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.518 [2024-04-18 11:18:26.131880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.776 [2024-04-18 11:18:26.226823] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.776 [2024-04-18 11:18:26.226890] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.776 [2024-04-18 11:18:26.226917] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.776 [2024-04-18 11:18:26.226926] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.776 [2024-04-18 11:18:26.226933] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.776 [2024-04-18 11:18:26.226969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.709 11:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:58.709 11:18:27 -- common/autotest_common.sh@850 -- # return 0 00:30:58.709 11:18:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:58.709 11:18:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:58.709 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.709 11:18:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.709 11:18:27 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:58.709 11:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.709 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.709 [2024-04-18 11:18:27.043495] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:58.709 11:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.709 11:18:27 -- host/digest.sh@105 -- # common_target_config 00:30:58.709 11:18:27 -- host/digest.sh@43 -- # rpc_cmd 00:30:58.709 11:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.709 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.709 null0 00:30:58.709 [2024-04-18 11:18:27.153764] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.709 [2024-04-18 11:18:27.177929] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.709 11:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.709 11:18:27 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:58.709 11:18:27 -- host/digest.sh@54 -- # local rw bs qd 00:30:58.709 11:18:27 -- host/digest.sh@56 -- # rw=randread 00:30:58.709 11:18:27 -- host/digest.sh@56 -- # bs=4096 00:30:58.709 11:18:27 -- host/digest.sh@56 -- # qd=128 00:30:58.709 11:18:27 -- host/digest.sh@58 -- # bperfpid=104968 00:30:58.709 11:18:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:58.709 11:18:27 -- host/digest.sh@60 -- # waitforlisten 
104968 /var/tmp/bperf.sock 00:30:58.709 11:18:27 -- common/autotest_common.sh@817 -- # '[' -z 104968 ']' 00:30:58.709 11:18:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:58.709 11:18:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:58.709 11:18:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:58.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:58.709 11:18:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:58.709 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.709 [2024-04-18 11:18:27.232916] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:30:58.709 [2024-04-18 11:18:27.233038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104968 ] 00:30:58.966 [2024-04-18 11:18:27.371134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.966 [2024-04-18 11:18:27.459456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.966 11:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:58.966 11:18:27 -- common/autotest_common.sh@850 -- # return 0 00:30:58.966 11:18:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:58.966 11:18:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:59.223 11:18:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:59.223 11:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.223 11:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:59.223 11:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.223 11:18:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.223 11:18:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.480 nvme0n1 00:30:59.480 11:18:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:59.480 11:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.480 11:18:28 -- common/autotest_common.sh@10 -- # set +x 00:30:59.480 11:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.480 11:18:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:59.480 11:18:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.739 Running I/O for 2 seconds... 
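The data digest errors that follow are the intended outcome of this test: on the target side the crc32c opcode has been assigned to the error module and told to corrupt results, so every READ the initiator receives over TCP with --ddgst enabled fails digest verification and completes as a transient transport error (and is retried, since the bdev retry count is -1). Below is a minimal, hedged sketch condensing the RPC sequence visible in this run; it assumes the target-side calls use the nvmf target's default RPC socket while the initiator-side calls use /var/tmp/bperf.sock, and it backgrounds bdevperf by hand where the test script instead waits for the socket.

  # target: route the crc32c opcode to the error module
  rpc.py accel_assign_opc -o crc32c -m error
  # initiator: start bdevperf with its own RPC socket (flags as in this run)
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target: keep crc32c clean while the controller is attached
  rpc.py accel_error_inject_error -o crc32c -t disable
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target: start corrupting crc32c results, then drive I/O from the initiator
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests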
00:30:59.739 [2024-04-18 11:18:28.222972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.223069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.223087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.236241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.236297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.236312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.251418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.251466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.251481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.263150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.263225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.263240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.275673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.275725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.275740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.290402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.290459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.290474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.303571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.303641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.303671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.319051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.319151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.319193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.331106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.331173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.331229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.345424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.345483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.345498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.361055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.361120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.739 [2024-04-18 11:18:28.361135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.739 [2024-04-18 11:18:28.375006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.739 [2024-04-18 11:18:28.375116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.740 [2024-04-18 11:18:28.375133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.997 [2024-04-18 11:18:28.387371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.997 [2024-04-18 11:18:28.387427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.997 [2024-04-18 11:18:28.387442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.997 [2024-04-18 11:18:28.400298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.997 [2024-04-18 11:18:28.400353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.997 [2024-04-18 11:18:28.400367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.997 [2024-04-18 11:18:28.415458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.997 [2024-04-18 11:18:28.415517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.997 [2024-04-18 11:18:28.415532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.997 [2024-04-18 11:18:28.429230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.997 [2024-04-18 11:18:28.429302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.997 [2024-04-18 11:18:28.429332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.997 [2024-04-18 11:18:28.442309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.997 [2024-04-18 11:18:28.442357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.997 [2024-04-18 11:18:28.442372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.456557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.456619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.456650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.469306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.469395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.469410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.484507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.484581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.484613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.498633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.498694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:59.998 [2024-04-18 11:18:28.498723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.512858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.512938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.512969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.527584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.527656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.527686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.540386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.540445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.540491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.555453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.555499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.555514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.568794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.568865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.568895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.584524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.584582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.584613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.598294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.598334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10013 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.598348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.612466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.612520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.612533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:59.998 [2024-04-18 11:18:28.627425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:30:59.998 [2024-04-18 11:18:28.627464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.998 [2024-04-18 11:18:28.627477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.256 [2024-04-18 11:18:28.639148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.256 [2024-04-18 11:18:28.639196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.256 [2024-04-18 11:18:28.639218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.256 [2024-04-18 11:18:28.653018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.256 [2024-04-18 11:18:28.653079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.256 [2024-04-18 11:18:28.653109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.256 [2024-04-18 11:18:28.666952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.256 [2024-04-18 11:18:28.667006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.256 [2024-04-18 11:18:28.667019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.256 [2024-04-18 11:18:28.684034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.256 [2024-04-18 11:18:28.684114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.256 [2024-04-18 11:18:28.684144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.256 [2024-04-18 11:18:28.698050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.256 [2024-04-18 11:18:28.698097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.698112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.711449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.711491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.711505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.724618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.724684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.724715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.738178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.738232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.738261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.752662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.752718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.752747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.766882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.766937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.766966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.779734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.779791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.779821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.793084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.793137] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.793166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.806354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.806412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.806426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.818944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.818996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.819026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.833422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.833491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.833519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.846229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.846281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.846310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.858736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.858791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.858820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.874494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.257 [2024-04-18 11:18:28.874547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.874577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.257 [2024-04-18 11:18:28.888979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 
00:31:00.257 [2024-04-18 11:18:28.889019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.257 [2024-04-18 11:18:28.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:28.902358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:28.902396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:28.902409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:28.917046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:28.917108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:28.917123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:28.931688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:28.931802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:28.931816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:28.946836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:28.946890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:28.946919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:28.960877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:28.960932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:28.960961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:28.972405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:28.972475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:28.972505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:28.987239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:28.987295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:28.987309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.001147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.001199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.001229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.014404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.014473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.014502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.028859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.028913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.028942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.040437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.040498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.040528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.054878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.054940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.054971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.069899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.069973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.070005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.082640] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.082696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.082726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.097742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.097793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.097808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.110456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.110513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.110543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.124445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.124506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.124520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.139586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.139634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.139649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.516 [2024-04-18 11:18:29.152563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.516 [2024-04-18 11:18:29.152605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.516 [2024-04-18 11:18:29.152618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.775 [2024-04-18 11:18:29.167123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.775 [2024-04-18 11:18:29.167175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.775 [2024-04-18 11:18:29.167230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:31:00.775 [2024-04-18 11:18:29.180746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.775 [2024-04-18 11:18:29.180800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.775 [2024-04-18 11:18:29.180829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.775 [2024-04-18 11:18:29.192965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.775 [2024-04-18 11:18:29.193041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.775 [2024-04-18 11:18:29.193087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.775 [2024-04-18 11:18:29.207086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.775 [2024-04-18 11:18:29.207136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.775 [2024-04-18 11:18:29.207164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.775 [2024-04-18 11:18:29.219802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.775 [2024-04-18 11:18:29.219853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.775 [2024-04-18 11:18:29.219882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.775 [2024-04-18 11:18:29.234160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.775 [2024-04-18 11:18:29.234211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.775 [2024-04-18 11:18:29.234241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.775 [2024-04-18 11:18:29.245970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.775 [2024-04-18 11:18:29.246022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.775 [2024-04-18 11:18:29.246077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.775 [2024-04-18 11:18:29.257559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.257610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.257639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.271731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.271782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.271811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.283789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.283839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.283867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.296079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.296140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.296183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.307554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.307606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.307636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.321419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.321477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.321513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.333736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.333787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.333816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.345577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.345616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.345629] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.359121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.359172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.359225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.372856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.372912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.372940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.384767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.384817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.384854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.398540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.398589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.398617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:00.776 [2024-04-18 11:18:29.411576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:00.776 [2024-04-18 11:18:29.411647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.776 [2024-04-18 11:18:29.411661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.424574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.424626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.424655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.437197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.437250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.437280] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.452077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.452141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.452171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.466094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.466163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.466192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.478790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.478842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.478871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.491123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.491173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.491226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.504922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.504987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.505017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.518292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.518347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.518375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.530206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.530257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:01.035 [2024-04-18 11:18:29.530286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.542528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.542581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.542609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.557120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.557171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.557199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.569307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.569360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.569389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.581463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.581523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.581567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.592776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.592827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.592863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.607247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.607285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.607298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.618688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.618741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13280 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.618770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.632176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.632237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.632265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.645840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.645894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.645924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.659309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.035 [2024-04-18 11:18:29.659348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.035 [2024-04-18 11:18:29.659361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.035 [2024-04-18 11:18:29.673573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.036 [2024-04-18 11:18:29.673627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.036 [2024-04-18 11:18:29.673641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.687555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.687608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.687638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.699104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.699157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.699170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.713126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.713177] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.713206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.728171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.728223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.728236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.740344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.740396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.740425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.755781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.755834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.755881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.769153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.769204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.769233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.783417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.783457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.783470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.796843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.796905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.796934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.809329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 
11:18:29.809381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.809411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.821483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.821527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.821543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.834938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.834992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.835022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.846157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.846208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.846238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.860466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.860533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.860563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.875315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.875353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.875367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.889017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.889083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.889113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.902383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.902436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.902466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.915730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.915783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.915812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.295 [2024-04-18 11:18:29.930068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.295 [2024-04-18 11:18:29.930131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.295 [2024-04-18 11:18:29.930161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.554 [2024-04-18 11:18:29.944407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.554 [2024-04-18 11:18:29.944460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.554 [2024-04-18 11:18:29.944504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.554 [2024-04-18 11:18:29.956171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.554 [2024-04-18 11:18:29.956233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.554 [2024-04-18 11:18:29.956264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.554 [2024-04-18 11:18:29.971400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:29.971440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:29.971454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:29.984927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:29.984983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:29.985013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:29.999684] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:29.999752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:29.999782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.014063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.014125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.014155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.026960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.027012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.027042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.040072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.040120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.040134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.052857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.052909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.052939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.067544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.067584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.067598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.081789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.081827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.081840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:01.555 [2024-04-18 11:18:30.094458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.094496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.094509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.108528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.108580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.108610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.122835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.122874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.122888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.137921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.137974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.138003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.152717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.152771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.152785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.166369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.166421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.166451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.179239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.179278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.179291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.555 [2024-04-18 11:18:30.191908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.555 [2024-04-18 11:18:30.191991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.555 [2024-04-18 11:18:30.192006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.813 [2024-04-18 11:18:30.205696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x65dcb0) 00:31:01.813 [2024-04-18 11:18:30.205735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.813 [2024-04-18 11:18:30.205749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:01.813 00:31:01.813 Latency(us) 00:31:01.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.813 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:01.813 nvme0n1 : 2.00 18753.01 73.25 0.00 0.00 6818.08 3455.53 18350.08 00:31:01.813 =================================================================================================================== 00:31:01.813 Total : 18753.01 73.25 0.00 0.00 6818.08 3455.53 18350.08 00:31:01.813 0 00:31:01.813 11:18:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:01.813 11:18:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:01.813 11:18:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:01.813 11:18:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:01.813 | .driver_specific 00:31:01.813 | .nvme_error 00:31:01.813 | .status_code 00:31:01.813 | .command_transient_transport_error' 00:31:02.072 11:18:30 -- host/digest.sh@71 -- # (( 147 > 0 )) 00:31:02.072 11:18:30 -- host/digest.sh@73 -- # killprocess 104968 00:31:02.072 11:18:30 -- common/autotest_common.sh@936 -- # '[' -z 104968 ']' 00:31:02.072 11:18:30 -- common/autotest_common.sh@940 -- # kill -0 104968 00:31:02.072 11:18:30 -- common/autotest_common.sh@941 -- # uname 00:31:02.072 11:18:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:02.072 11:18:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104968 00:31:02.072 killing process with pid 104968 00:31:02.072 Received shutdown signal, test time was about 2.000000 seconds 00:31:02.072 00:31:02.072 Latency(us) 00:31:02.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.072 =================================================================================================================== 00:31:02.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.072 11:18:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:02.072 11:18:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:02.072 11:18:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104968' 00:31:02.072 11:18:30 -- common/autotest_common.sh@955 -- # kill 104968 00:31:02.072 11:18:30 -- common/autotest_common.sh@960 -- # wait 104968 00:31:02.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
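The xtrace lines above show how the digest test decides pass/fail for this run: get_transient_errcount queries bdevperf's per-bdev iostat over the bperf RPC socket and extracts the transient-transport-error counter with jq, and the test only passes if that counter is non-zero (147 in this run). A minimal standalone sketch of the same check, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and a bdevperf instance already listening on /var/tmp/bperf.sock with a bdev named nvme0n1 (paths, socket, and bdev name taken from the log; the script itself is illustrative, not part of digest.sh):

    #!/usr/bin/env bash
    # Illustrative re-creation of the get_transient_errcount check traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    # bdev_get_iostat returns JSON; the nvme_error counters are available because
    # the test enables them earlier via bdev_nvme_set_options --nvme-error-stat.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')

    # Each injected data digest error is expected to surface as a
    # COMMAND TRANSIENT TRANSPORT ERROR completion, so a zero count fails the run.
    (( errcount > 0 )) || exit 1
    echo "transient transport errors observed: $errcount"

In this run the assertion appears in the trace as (( 147 > 0 )), after which the first bdevperf instance (pid 104968) is killed and the next test case, run_bperf_err randread 131072 16, is started below.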
00:31:02.330 11:18:30 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:31:02.330 11:18:30 -- host/digest.sh@54 -- # local rw bs qd 00:31:02.330 11:18:30 -- host/digest.sh@56 -- # rw=randread 00:31:02.330 11:18:30 -- host/digest.sh@56 -- # bs=131072 00:31:02.330 11:18:30 -- host/digest.sh@56 -- # qd=16 00:31:02.330 11:18:30 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:02.330 11:18:30 -- host/digest.sh@58 -- # bperfpid=105039 00:31:02.330 11:18:30 -- host/digest.sh@60 -- # waitforlisten 105039 /var/tmp/bperf.sock 00:31:02.330 11:18:30 -- common/autotest_common.sh@817 -- # '[' -z 105039 ']' 00:31:02.330 11:18:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:02.330 11:18:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:02.330 11:18:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.330 11:18:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:02.330 11:18:30 -- common/autotest_common.sh@10 -- # set +x 00:31:02.330 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:02.330 Zero copy mechanism will not be used. 00:31:02.330 [2024-04-18 11:18:30.785025] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:02.331 [2024-04-18 11:18:30.785126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105039 ] 00:31:02.331 [2024-04-18 11:18:30.918630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.589 [2024-04-18 11:18:31.010428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.153 11:18:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:03.153 11:18:31 -- common/autotest_common.sh@850 -- # return 0 00:31:03.153 11:18:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:03.153 11:18:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:03.410 11:18:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:03.410 11:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.411 11:18:32 -- common/autotest_common.sh@10 -- # set +x 00:31:03.411 11:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.411 11:18:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.411 11:18:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.668 nvme0n1 00:31:03.927 11:18:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:03.927 11:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.927 11:18:32 -- common/autotest_common.sh@10 -- # set +x 00:31:03.927 11:18:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.927 11:18:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:03.927 11:18:32 -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.927 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.927 Zero copy mechanism will not be used. 00:31:03.927 Running I/O for 2 seconds... 00:31:03.927 [2024-04-18 11:18:32.444101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.444164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.444196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.449555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.449596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.449610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.454734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.454789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.454820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.459437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.459477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.459492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.462893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.462959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.462988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.467052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.467115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.467144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.471856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.471897] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.471926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.475215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.475254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.475268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.479791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.479847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.479860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.484728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.484782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.484811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.489057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.489111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.489139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.492584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.492638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.492666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.496449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.496501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.496529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.499989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.500069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.500084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.503229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.503268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.503281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.506878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.506930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.506958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.510498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.510580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.514720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.514773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.514802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.518587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.518625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.518639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.521935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.521986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.522015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.525984] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.526051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.927 [2024-04-18 11:18:32.526065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.927 [2024-04-18 11:18:32.529524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.927 [2024-04-18 11:18:32.529564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.529577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.533024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.533086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.533116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.536936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.536976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.536989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.540474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.540529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.540558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.544119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.544171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.544199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.549020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.549086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.549115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:31:03.928 [2024-04-18 11:18:32.552306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.552359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.552387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.556480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.556517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.556530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.560765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.560820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.560833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:03.928 [2024-04-18 11:18:32.565389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:03.928 [2024-04-18 11:18:32.565430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.928 [2024-04-18 11:18:32.565444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.188 [2024-04-18 11:18:32.569408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.188 [2024-04-18 11:18:32.569464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.188 [2024-04-18 11:18:32.569493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.188 [2024-04-18 11:18:32.573778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.188 [2024-04-18 11:18:32.573820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.188 [2024-04-18 11:18:32.573833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.577485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.577539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.577569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.580700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.580755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.580768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.584660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.584714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.584728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.588972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.589056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.589071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.592390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.592429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.592442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.596739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.596793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.596822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.601513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.601552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.601566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.604761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.604801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.604831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.609573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.609614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.609627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.614399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.614455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.614469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.618240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.618293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.618322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.621290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.621346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.621375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.625560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.625601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.625614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.629611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.629650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.629663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.633957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.634011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.634041] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.637483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.637540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.637553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.641153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.641206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.641235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.645013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.645078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.645092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.648930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.648969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.648982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.653175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.653213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.653226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.656995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.657075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.657089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.661198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.661250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:04.189 [2024-04-18 11:18:32.661278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.664807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.664860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.664889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.668739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.668794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.668823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.673142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.673206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.673238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.677150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.677213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.677242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.681158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.189 [2024-04-18 11:18:32.681210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.189 [2024-04-18 11:18:32.681238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.189 [2024-04-18 11:18:32.685057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.685120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.685149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.689078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.689181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.689210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.693131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.693184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.693213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.697232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.697287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.697316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.701829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.701888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.701904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.705625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.705666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.705679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.709302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.709356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.709385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.713491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.713546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.713559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.716482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.716537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.716550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.720020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.720092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.720106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.724374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.724412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.724426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.729319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.729358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.729371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.733153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.733192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.733206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.737751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.737818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.737834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.742828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.742884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.742914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.747553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.747610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.747640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.751558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.751637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.751668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.755409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.755450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.755464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.759829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.759881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.759909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.763826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.763879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.763907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.767885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.767937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.767966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.771702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.771770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.771797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.775546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 
[2024-04-18 11:18:32.775600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.775629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.780483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.780549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.780579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.784525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.784579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.784609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.788793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.788847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.788875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.792598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.190 [2024-04-18 11:18:32.792650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.190 [2024-04-18 11:18:32.792694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.190 [2024-04-18 11:18:32.795956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.796010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.796039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.799700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.799766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.799795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.804323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.804362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.804391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.807828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.807881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.807910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.812026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.812088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.812117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.816174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.816212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.816241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.819509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.819592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.819621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.823171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.823221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.823234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.191 [2024-04-18 11:18:32.827047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.191 [2024-04-18 11:18:32.827127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.191 [2024-04-18 11:18:32.827157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.451 [2024-04-18 11:18:32.830977] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.451 [2024-04-18 11:18:32.831060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-04-18 11:18:32.831075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.451 [2024-04-18 11:18:32.835475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.451 [2024-04-18 11:18:32.835517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-04-18 11:18:32.835531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.451 [2024-04-18 11:18:32.839409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.451 [2024-04-18 11:18:32.839450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.839464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.842578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.842632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.842661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.846533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.846586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.846614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.850601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.850671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.850701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.854186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.854254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.854282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
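[editor's note] For context on the repeated "data digest error on tqpair" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs in this log: the initiator recomputes a CRC32C digest over each received C2H data PDU, and when it does not match the DDGST carried by the PDU the READ is completed with a transient transport status so the upper layer may retry. The sketch below is a minimal, illustrative digest check only; it is not SPDK's implementation, and the pdu_payload/received_ddgst names are hypothetical.

    /* Illustrative only: how an NVMe/TCP data digest (DDGST) mismatch like the
     * ones logged above can be detected.  Not SPDK code; names are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
     * algorithm NVMe/TCP uses for its header and data digests. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++) {
                uint32_t mask = (crc & 1) ? 0xFFFFFFFFu : 0u;
                crc = (crc >> 1) ^ (0x82F63B78u & mask);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical 32-byte C2H data payload (the READs above are len:32)
         * and a deliberately wrong digest value. */
        uint8_t pdu_payload[32] = {0};
        uint32_t received_ddgst = 0xDEADBEEFu;

        if (crc32c(pdu_payload, sizeof(pdu_payload)) != received_ddgst) {
            /* Conceptually, this mismatch is what the log reports as a
             * "data digest error"; the command then completes with a
             * transient transport error so it can be retried. */
            fprintf(stderr, "data digest error: DDGST mismatch\n");
            return 1;
        }
        return 0;
    }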
00:31:04.452 [2024-04-18 11:18:32.857746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.857800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.857829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.862192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.862230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.862243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.866173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.866227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.866255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.869192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.869229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.869258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.873921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.873960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.873972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.878564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.878617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.878655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.881379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.881414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.881443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.885587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.885639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.885669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.889269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.889308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.889321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.893380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.893420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.893449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.896982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.897060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.897075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.901201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.901240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.901269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.904442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.904494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.904523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.908621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.908676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.908704] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.912945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.912999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.913028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.916267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.916320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.916333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.919845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.919899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.919911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.923550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.923588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.923600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.927294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.927353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.927376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.931688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.452 [2024-04-18 11:18:32.931729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.452 [2024-04-18 11:18:32.931743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.452 [2024-04-18 11:18:32.936160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.936201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.936215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.940071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.940111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.940125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.944217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.944255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.944268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.948148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.948187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.948200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.951551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.951605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.951618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.955876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.955930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.955959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.960830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.960884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.960914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.965062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.965110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:04.453 [2024-04-18 11:18:32.965123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.967741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.967793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.967822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.972801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.972839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.972884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.977877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.977933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.977963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.980827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.980879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.980907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.985626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.985682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.985696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.989828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.989887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.989916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.993569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.993607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.993652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:32.999477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:32.999583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:32.999613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.003173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.003240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.003253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.008370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.008413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.008427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.013860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.013952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.013969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.020750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.020855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.020872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.025175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.025282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.025301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.029694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.029813] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.029833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.036232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.036374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.036393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.041051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.041175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.041195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.047608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.047732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.047750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.052760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.052863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.052880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.058464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.453 [2024-04-18 11:18:33.058570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.453 [2024-04-18 11:18:33.058586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.453 [2024-04-18 11:18:33.064265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.454 [2024-04-18 11:18:33.064379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.454 [2024-04-18 11:18:33.064406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.454 [2024-04-18 11:18:33.070416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.454 [2024-04-18 11:18:33.070528] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.454 [2024-04-18 11:18:33.070545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.454 [2024-04-18 11:18:33.074689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.454 [2024-04-18 11:18:33.074796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.454 [2024-04-18 11:18:33.074813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.454 [2024-04-18 11:18:33.079467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.454 [2024-04-18 11:18:33.079566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.454 [2024-04-18 11:18:33.079582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.454 [2024-04-18 11:18:33.084918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.454 [2024-04-18 11:18:33.085026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.454 [2024-04-18 11:18:33.085062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.454 [2024-04-18 11:18:33.089556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.454 [2024-04-18 11:18:33.089663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.454 [2024-04-18 11:18:33.089680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.713 [2024-04-18 11:18:33.096816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.713 [2024-04-18 11:18:33.096980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.713 [2024-04-18 11:18:33.097014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.713 [2024-04-18 11:18:33.105305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.105433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.105451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.112781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 
00:31:04.714 [2024-04-18 11:18:33.112900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.112917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.120543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.120673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.120694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.125606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.125737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.125755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.135273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.135383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.135401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.143798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.143883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.143903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.150103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.150194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.150213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.157114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.157214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.157233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.162346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.162463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.162482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.168952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.169082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.169101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.176049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.176160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.176180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.180455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.180579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.180601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.189114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.189252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.189273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.197839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.197981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.198001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.205376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.205507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.205526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.211293] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.211428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.211449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.218900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.219021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.219058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.223825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.223951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.223971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.232265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.232394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.232415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.237819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.237974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.237999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.244510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.244628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.244646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.254016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.254152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.254173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:31:04.714 [2024-04-18 11:18:33.264145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.264267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.264285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.272931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.273090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.273109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.278270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.278381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.278399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.285343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.285477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.285496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.292895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.293021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.293056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.300110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.300262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.300305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.306700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.306824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.306843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.315784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.315908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.315928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.322354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.322525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.322548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.331104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.331244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.331264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.340719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.340846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.340866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.714 [2024-04-18 11:18:33.348676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.714 [2024-04-18 11:18:33.348774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.714 [2024-04-18 11:18:33.348793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.355490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.355594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.355611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.359669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.359780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.359798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.364074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.364185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.364203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.369605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.369697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.369712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.374942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.375044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.375060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.378244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.378305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.378336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.382660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.382717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.382748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.386951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.387010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.387040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.390122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.390165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.390195] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.394290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.394346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.397984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.398065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.398079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.402081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.402137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.402167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.406131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.406186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.406215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.409948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.410002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.410032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.414105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.414160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.414190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.418407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.418461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:04.975 [2024-04-18 11:18:33.418490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.423109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.423167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.423208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.426275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.426330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.426359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.430521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.430574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.430604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.434406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.434466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.434480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.438611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.438666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.438696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.443774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.443832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.443862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.447161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.447276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.447305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.452152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.452209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.452222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.457298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.975 [2024-04-18 11:18:33.457359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.975 [2024-04-18 11:18:33.457389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.975 [2024-04-18 11:18:33.462163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.462217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.462247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.464946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.464998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.465027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.469104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.469160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.469190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.473385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.473438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.473467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.476842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.476897] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.476927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.480750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.480806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.480836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.485101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.485155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.485186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.488715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.488771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.488801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.493174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.493216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.493245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.497560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.497615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.497644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.500849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.500906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.500935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.504696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.504751] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.504780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.508967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.509024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.509067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.512821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.512878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.512908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.517076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.517128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.517157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.521082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.521136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.521165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.525606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.525663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.525693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.529549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.529604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.529634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.533526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.533581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.533611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.537669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.537723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.537752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.541373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.541425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.541454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.546219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.546279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.546309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.550020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.550088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.550119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.554241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.554318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.554351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.559453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.559500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.559520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.563578] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.563622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.563652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.976 [2024-04-18 11:18:33.567703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.976 [2024-04-18 11:18:33.567775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.976 [2024-04-18 11:18:33.567806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.573140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.573185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.573199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.576385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.576444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.576458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.580475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.580531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.580544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.585187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.585230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.585245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.588781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.588837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.588850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:04.977 [2024-04-18 11:18:33.593243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.593283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.593296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.598327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.598391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.598405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.603425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.603469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.603483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.607962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.608025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.608051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:04.977 [2024-04-18 11:18:33.610800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:04.977 [2024-04-18 11:18:33.610857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.977 [2024-04-18 11:18:33.610870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.616225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.616293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.616308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.619452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.619497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.619510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.623509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.623559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.623573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.627449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.627505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.627520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.631421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.631475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.631490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.636144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.636216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.636231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.640963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.641052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.641068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.644661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.644709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.644723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.648388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.648455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.648470] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.652105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.652175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.652189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.656827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.656899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.656914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.236 [2024-04-18 11:18:33.661403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.236 [2024-04-18 11:18:33.661484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.236 [2024-04-18 11:18:33.661500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.664833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.664902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.664917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.668998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.669076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.669092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.673490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.673563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.673578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.678215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.678273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.678288] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.681315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.681376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.681390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.685268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.685341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.685355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.689901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.689974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.689988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.694531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.694605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.694620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.697396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.697457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.697471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.703141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.703221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.703237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.706887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.706947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:05.237 [2024-04-18 11:18:33.706963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.711343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.711402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.711417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.714950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.715000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.715016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.718724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.718783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.718798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.722650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.722707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.722721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.727427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.727486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.727501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.731112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.731168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.731193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.734588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.734637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.734652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.739414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.739471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.739486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.743920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.743976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.743990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.747601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.747647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.747662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.752138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.752195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.752211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.756086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.756135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.756149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.759918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.759971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.759986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.764671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.764729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.764745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.768553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.768602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.768616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.773297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.237 [2024-04-18 11:18:33.773357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.237 [2024-04-18 11:18:33.773372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.237 [2024-04-18 11:18:33.777273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.777326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.777341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.781875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.781961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.781987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.786705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.786760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.786775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.790864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.790917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.790931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.794582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 
00:31:05.238 [2024-04-18 11:18:33.794650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.794665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.799299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.799353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.799369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.804839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.804947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.804974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.809374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.809439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.809454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.813374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.813453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.813469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.818090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.818148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.818163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.821905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.821969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.821983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.825966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.826020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.826046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.829897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.829950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.829965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.833894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.833954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.833969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.838791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.838857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.838872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.842530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.842583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.842597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.847128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.847201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.847218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.850453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.850503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.850516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.855015] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.855081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.855097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.858655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.858716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.858730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.862315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.862368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.862382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.866734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.866789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.866803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.872292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.872334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.872348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.238 [2024-04-18 11:18:33.875512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.238 [2024-04-18 11:18:33.875556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.238 [2024-04-18 11:18:33.875571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.497 [2024-04-18 11:18:33.880234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.880285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.880301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:31:05.498 [2024-04-18 11:18:33.885314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.885359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.885372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.888777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.888821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.888834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.892656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.892699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.892713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.897394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.897434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.897448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.900959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.901001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.901015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.905218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.905258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.905271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.909918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.909962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.909975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.914613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.914656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.914670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.918255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.918292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.918305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.922928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.922970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.922983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.928258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.928300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.928313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.934611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.934678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.934704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.943482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.943550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.943593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.946808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.946848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.946862] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.950968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.951011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.951024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.956236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.956276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.956290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.963783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.963849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.963878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.970843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.970887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.970901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.977253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.977335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.977359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.983483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.983527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.983541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.986625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.986663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.986676] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.990796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.990840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.990854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.995085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.995123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.995137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:33.998752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:33.998827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:33.998848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:34.003080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.498 [2024-04-18 11:18:34.003118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.498 [2024-04-18 11:18:34.003131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.498 [2024-04-18 11:18:34.006874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.006913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.006927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.010909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.010949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.010962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.015608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.015665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:05.499 [2024-04-18 11:18:34.015680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.020174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.020216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.020230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.023981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.024023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.024053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.028404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.028444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.028457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.032442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.032481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.032494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.035525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.035564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.035576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.039626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.039665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.039678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.043994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.044044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.044058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.046900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.046937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.046951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.050627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.050665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.050677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.054541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.054580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.054593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.059041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.059077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.059090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.062572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.062613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.062626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.066050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.066090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.066102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.070967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.071045] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.071062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.075630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.075670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.075683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.078908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.078946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.078960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.084204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.084245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.084258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.088563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.088604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.088618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.092866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.092934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.092955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.096521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.096560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.096574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.100456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.100495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.100509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.104785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.104823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.104837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.109444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.109484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.109498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.112593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.112639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.499 [2024-04-18 11:18:34.112654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.499 [2024-04-18 11:18:34.117432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.499 [2024-04-18 11:18:34.117474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.500 [2024-04-18 11:18:34.117488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.500 [2024-04-18 11:18:34.122662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.500 [2024-04-18 11:18:34.122702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.500 [2024-04-18 11:18:34.122715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.500 [2024-04-18 11:18:34.127281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.500 [2024-04-18 11:18:34.127322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.500 [2024-04-18 11:18:34.127341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.500 [2024-04-18 11:18:34.130113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x188c9a0) 00:31:05.500 [2024-04-18 11:18:34.130150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.500 [2024-04-18 11:18:34.130162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.500 [2024-04-18 11:18:34.134490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.500 [2024-04-18 11:18:34.134531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.500 [2024-04-18 11:18:34.134545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.139410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.139451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.139465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.143379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.143419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.143433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.146598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.146637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.146651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.150366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.150406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.150419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.154375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.154420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.154434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.158743] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.158784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.158797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.162225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.162265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.162278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.166712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.166751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.166764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.170564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.170608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.170621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.174767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.174804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.174818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.178911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.178949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.178962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.182766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.182804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.182829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
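The entries above all follow one pattern: nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair 0x188c9a0, and the affected READ (qid:1, varying cid and lba, len:32) is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Only the timestamps, cid, lba, and sqhd fields change from record to record. Below is a minimal, illustrative Python sketch for summarizing such entries offline; the regular expressions and field names are assumptions derived purely from the log format visible here, and the script is not part of SPDK or of this test suite.

    # summarize_digest_errors.py -- illustrative sketch, assumes the record
    # layout shown in this console log (not an SPDK tool).
    import re
    import sys
    from collections import Counter

    # One pattern per record type seen above: the digest-error line and the READ command print.
    DIGEST_RE = re.compile(r"data digest error on tqpair=\((0x[0-9a-f]+)\)")
    READ_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

    def summarize(lines):
        digest_errors = Counter()  # digest errors per tqpair address
        reads_per_cid = Counter()  # affected READ commands per cid
        for line in lines:
            m = DIGEST_RE.search(line)
            if m:
                digest_errors[m.group(1)] += 1
            m = READ_RE.search(line)
            if m:
                reads_per_cid[m.group(2)] += 1
        return digest_errors, reads_per_cid

    if __name__ == "__main__":
        errors, reads = summarize(sys.stdin)
        for qpair, n in errors.items():
            print(f"{n} data digest errors on tqpair {qpair}")
        for cid, n in sorted(reads.items(), key=lambda kv: int(kv[0])):
            print(f"cid {cid}: {n} affected READs")

Fed this excerpt on stdin (for example via python3 summarize_digest_errors.py < console.log, with the file name being hypothetical), it would attribute every digest error to tqpair 0x188c9a0 and break the transient-transport-error READ completions down by cid, matching the repetition visible in the records above and below.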
00:31:05.759 [2024-04-18 11:18:34.187305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.187345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.187358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.191328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.191367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.191380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.195429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.195467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.195480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.199134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.199202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.199252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.203514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.203555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.203568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.208353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.208394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.208407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.211161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.211211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.211225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.215513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.215551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.215564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.219330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.759 [2024-04-18 11:18:34.219370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.759 [2024-04-18 11:18:34.219383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.759 [2024-04-18 11:18:34.223777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.223817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.223830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.227345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.227386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.227399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.231696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.231735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.231748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.235391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.235429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.235443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.239454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.239493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.239506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.243309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.243347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.243360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.248508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.248547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.248560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.252134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.252171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.252184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.256523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.256562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.256574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.261027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.261077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.261089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.266178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.266218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.266231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.272050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.272111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 
[2024-04-18 11:18:34.272129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.275915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.275958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.275972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.280385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.280424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.280438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.285563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.285602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.285615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.289927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.289967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.289980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.292888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.292927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.292939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.297156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.297195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.297208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.301553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.301593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.301606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.304704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.304743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.304756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.309392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.309432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.309445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.312822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.312861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.312874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.317184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.317223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.317236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.320204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.320242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.320255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.324615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.324663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.324684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.328643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.328712] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.328729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.332359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.332399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.760 [2024-04-18 11:18:34.332413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.760 [2024-04-18 11:18:34.336326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.760 [2024-04-18 11:18:34.336366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.336379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.339785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.339824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.339837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.343587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.343627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.343640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.347536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.347576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.347590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.352451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.352500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.352514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.356762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 
11:18:34.356802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.356816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.360120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.360159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.360173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.364583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.364631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.364645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.367945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.367988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.368002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.373062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.373102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.373115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.376778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.376818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.376831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.380977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.381017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.381042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.384154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.384194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.384207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.388638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.388678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.388691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.392371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.392411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.392424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:05.761 [2024-04-18 11:18:34.396897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:05.761 [2024-04-18 11:18:34.396939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.761 [2024-04-18 11:18:34.396952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.401875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.401916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.401929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.405310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.405351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.405364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.409764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.409803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.409816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.414795] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.414835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.414848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.419051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.419093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.419106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.422465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.422503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.422516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.426555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.426594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.426608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.430735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.430773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.430786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:06.019 [2024-04-18 11:18:34.434269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x188c9a0) 00:31:06.019 [2024-04-18 11:18:34.434309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.019 [2024-04-18 11:18:34.434322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:06.019 00:31:06.019 Latency(us) 00:31:06.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.019 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:06.019 nvme0n1 : 2.00 6927.39 865.92 0.00 0.00 2305.44 584.61 10545.34 00:31:06.019 =================================================================================================================== 00:31:06.019 Total : 6927.39 865.92 0.00 0.00 2305.44 584.61 10545.34 00:31:06.019 0 
00:31:06.020 11:18:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:06.020 11:18:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:06.020 11:18:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:06.020 | .driver_specific 00:31:06.020 | .nvme_error 00:31:06.020 | .status_code 00:31:06.020 | .command_transient_transport_error' 00:31:06.020 11:18:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:06.278 11:18:34 -- host/digest.sh@71 -- # (( 447 > 0 )) 00:31:06.278 11:18:34 -- host/digest.sh@73 -- # killprocess 105039 00:31:06.278 11:18:34 -- common/autotest_common.sh@936 -- # '[' -z 105039 ']' 00:31:06.278 11:18:34 -- common/autotest_common.sh@940 -- # kill -0 105039 00:31:06.278 11:18:34 -- common/autotest_common.sh@941 -- # uname 00:31:06.278 11:18:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:06.278 11:18:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105039 00:31:06.278 11:18:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:06.278 11:18:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:06.278 11:18:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105039' 00:31:06.278 killing process with pid 105039 00:31:06.278 Received shutdown signal, test time was about 2.000000 seconds 00:31:06.278 00:31:06.278 Latency(us) 00:31:06.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.278 =================================================================================================================== 00:31:06.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:06.278 11:18:34 -- common/autotest_common.sh@955 -- # kill 105039 00:31:06.278 11:18:34 -- common/autotest_common.sh@960 -- # wait 105039 00:31:06.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:06.537 11:18:34 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:31:06.537 11:18:34 -- host/digest.sh@54 -- # local rw bs qd 00:31:06.537 11:18:34 -- host/digest.sh@56 -- # rw=randwrite 00:31:06.537 11:18:34 -- host/digest.sh@56 -- # bs=4096 00:31:06.537 11:18:34 -- host/digest.sh@56 -- # qd=128 00:31:06.537 11:18:34 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:06.537 11:18:34 -- host/digest.sh@58 -- # bperfpid=105125 00:31:06.537 11:18:34 -- host/digest.sh@60 -- # waitforlisten 105125 /var/tmp/bperf.sock 00:31:06.537 11:18:34 -- common/autotest_common.sh@817 -- # '[' -z 105125 ']' 00:31:06.537 11:18:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:06.537 11:18:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:06.537 11:18:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:06.537 11:18:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:06.537 11:18:34 -- common/autotest_common.sh@10 -- # set +x 00:31:06.537 [2024-04-18 11:18:34.982440] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:31:06.537 [2024-04-18 11:18:34.982557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105125 ] 00:31:06.537 [2024-04-18 11:18:35.118537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.794 [2024-04-18 11:18:35.212267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.794 11:18:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:06.794 11:18:35 -- common/autotest_common.sh@850 -- # return 0 00:31:06.794 11:18:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:06.794 11:18:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:07.051 11:18:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:07.051 11:18:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.051 11:18:35 -- common/autotest_common.sh@10 -- # set +x 00:31:07.051 11:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.051 11:18:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:07.052 11:18:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:07.618 nvme0n1 00:31:07.618 11:18:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:07.618 11:18:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.618 11:18:35 -- common/autotest_common.sh@10 -- # set +x 00:31:07.618 11:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.618 11:18:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:07.618 11:18:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.618 Running I/O for 2 seconds... 
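[editor's note] At this point the trace has torn down the previous bperf instance and set up the next case (randwrite, 4 KiB, QD 128): bdevperf is started on core mask 0x2 with its own RPC socket, per-NVMe error statistics are enabled, crc32c error injection is re-armed in the accel layer, and the TCP controller is attached with data digest enabled before perform_tests drives I/O for 2 seconds. A condensed recap of that sequence, using only commands and flags that appear in the trace (a sketch for orientation, not the test script itself; rpc_cmd in the trace uses the app's default RPC socket for the injection call), would be:

    # Sketch of the host/digest.sh randwrite digest-error setup seen above.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock

    "$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    while [ ! -S "$SOCK" ]; do sleep 0.1; done   # the real script waits for the socket to listen

    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # default RPC socket, per rpc_cmd
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # arm corruption for the run

    # Corrupted crc32c results surface as data digest errors, which the initiator
    # completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as in the log below.
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests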
00:31:07.618 [2024-04-18 11:18:36.104642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ee5c8 00:31:07.618 [2024-04-18 11:18:36.105662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.105720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.116435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e2c28 00:31:07.618 [2024-04-18 11:18:36.117244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.117298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.131779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f7538 00:31:07.618 [2024-04-18 11:18:36.133659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.133693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.143263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e2c28 00:31:07.618 [2024-04-18 11:18:36.144809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.144858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.154599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f8e88 00:31:07.618 [2024-04-18 11:18:36.156095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.156133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.165991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e5a90 00:31:07.618 [2024-04-18 11:18:36.167363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.167400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.177251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ea248 00:31:07.618 [2024-04-18 11:18:36.178370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.178417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 
p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.188978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e27f0 00:31:07.618 [2024-04-18 11:18:36.190033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.190090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.200738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190dfdc0 00:31:07.618 [2024-04-18 11:18:36.201743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.201778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.215493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f2510 00:31:07.618 [2024-04-18 11:18:36.217197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.217232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.227036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e8d30 00:31:07.618 [2024-04-18 11:18:36.228296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.228333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.239219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e1f80 00:31:07.618 [2024-04-18 11:18:36.240612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.240644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:07.618 [2024-04-18 11:18:36.254125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190de038 00:31:07.618 [2024-04-18 11:18:36.256156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.618 [2024-04-18 11:18:36.256213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.876 [2024-04-18 11:18:36.262812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190eb760 00:31:07.876 [2024-04-18 11:18:36.263734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.876 [2024-04-18 11:18:36.263768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:07.876 [2024-04-18 11:18:36.278296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f46d0 00:31:07.876 [2024-04-18 11:18:36.280343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.876 [2024-04-18 11:18:36.280381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:07.876 [2024-04-18 11:18:36.290279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fb8b8 00:31:07.876 [2024-04-18 11:18:36.292231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.876 [2024-04-18 11:18:36.292266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:07.876 [2024-04-18 11:18:36.298921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fb8b8 00:31:07.876 [2024-04-18 11:18:36.299831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.876 [2024-04-18 11:18:36.299866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:07.876 [2024-04-18 11:18:36.312552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e0a68 00:31:07.877 [2024-04-18 11:18:36.313844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.313895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.324068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f46d0 00:31:07.877 [2024-04-18 11:18:36.325145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.325180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.335476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6458 00:31:07.877 [2024-04-18 11:18:36.336405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.336442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.346636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f3e60 00:31:07.877 [2024-04-18 11:18:36.347401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.347438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.360536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190dece0 00:31:07.877 [2024-04-18 11:18:36.361526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.361562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.372560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fa3a0 00:31:07.877 [2024-04-18 11:18:36.373743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.373778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.384794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e3060 00:31:07.877 [2024-04-18 11:18:36.386060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.386123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.397033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ee190 00:31:07.877 [2024-04-18 11:18:36.397831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.397863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.408744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fda78 00:31:07.877 [2024-04-18 11:18:36.409420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.409456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.421180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0350 00:31:07.877 [2024-04-18 11:18:36.421926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.421964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.432561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e38d0 00:31:07.877 [2024-04-18 11:18:36.433215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.433249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.446675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0350 00:31:07.877 [2024-04-18 11:18:36.448178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.448232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.457806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f3e60 00:31:07.877 [2024-04-18 11:18:36.459138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.459172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.467945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fc560 00:31:07.877 [2024-04-18 11:18:36.468705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.468743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.482599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f96f8 00:31:07.877 [2024-04-18 11:18:36.484053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.484129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.494671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e4578 00:31:07.877 [2024-04-18 11:18:36.496252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.496299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:07.877 [2024-04-18 11:18:36.505952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e6b70 00:31:07.877 [2024-04-18 11:18:36.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.877 [2024-04-18 11:18:36.507938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.519449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f3a28 00:31:08.136 [2024-04-18 11:18:36.520437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.520477] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.531432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ee5c8 00:31:08.136 [2024-04-18 11:18:36.532260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.532297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.545115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0350 00:31:08.136 [2024-04-18 11:18:36.546845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.546880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.554877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0350 00:31:08.136 [2024-04-18 11:18:36.555930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.555964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.567307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f1430 00:31:08.136 [2024-04-18 11:18:36.568452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.568485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.580322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190eb760 00:31:08.136 [2024-04-18 11:18:36.581324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.581359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.592195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6890 00:31:08.136 [2024-04-18 11:18:36.593006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.593090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.603857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ddc00 00:31:08.136 [2024-04-18 11:18:36.604513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.604552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.617823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f57b0 00:31:08.136 [2024-04-18 11:18:36.619347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.619383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.629614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ed4e8 00:31:08.136 [2024-04-18 11:18:36.630897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.630932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.640815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0350 00:31:08.136 [2024-04-18 11:18:36.641965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.641998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.654315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fb8b8 00:31:08.136 [2024-04-18 11:18:36.655942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.655993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.666649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e01f8 00:31:08.136 [2024-04-18 11:18:36.667896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.667937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.678461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fcdd0 00:31:08.136 [2024-04-18 11:18:36.679628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.679665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.689903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e6fa8 00:31:08.136 [2024-04-18 11:18:36.690970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 
11:18:36.691007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.701736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f81e0 00:31:08.136 [2024-04-18 11:18:36.702592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.136 [2024-04-18 11:18:36.702649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:08.136 [2024-04-18 11:18:36.713977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e27f0 00:31:08.136 [2024-04-18 11:18:36.715026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.137 [2024-04-18 11:18:36.715101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:08.137 [2024-04-18 11:18:36.725759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e0a68 00:31:08.137 [2024-04-18 11:18:36.726621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.137 [2024-04-18 11:18:36.726670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:08.137 [2024-04-18 11:18:36.740657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ebfd0 00:31:08.137 [2024-04-18 11:18:36.742238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.137 [2024-04-18 11:18:36.742275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:08.137 [2024-04-18 11:18:36.750091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6020 00:31:08.137 [2024-04-18 11:18:36.750959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.137 [2024-04-18 11:18:36.750992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:08.137 [2024-04-18 11:18:36.764947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f1ca0 00:31:08.137 [2024-04-18 11:18:36.766843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.137 [2024-04-18 11:18:36.766879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:08.137 [2024-04-18 11:18:36.774635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f8618 00:31:08.137 [2024-04-18 11:18:36.775945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:08.137 [2024-04-18 11:18:36.775980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.786328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f9b30 00:31:08.395 [2024-04-18 11:18:36.787387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.787424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.800555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190de470 00:31:08.395 [2024-04-18 11:18:36.802230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.802280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.810910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0bc0 00:31:08.395 [2024-04-18 11:18:36.812800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.812835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.823684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f31b8 00:31:08.395 [2024-04-18 11:18:36.824795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.824829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.834487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190df118 00:31:08.395 [2024-04-18 11:18:36.835707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.835757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.847877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f31b8 00:31:08.395 [2024-04-18 11:18:36.849576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.849611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.857163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fef90 00:31:08.395 [2024-04-18 11:18:36.858060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16387 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:08.395 [2024-04-18 11:18:36.858121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.869382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e7818 00:31:08.395 [2024-04-18 11:18:36.870354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.870403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.881212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0788 00:31:08.395 [2024-04-18 11:18:36.882103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.882191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.894458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fef90 00:31:08.395 [2024-04-18 11:18:36.895879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.895916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.906901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f9f68 00:31:08.395 [2024-04-18 11:18:36.908390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.908424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.919062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f31b8 00:31:08.395 [2024-04-18 11:18:36.920471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.920540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.931347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190dfdc0 00:31:08.395 [2024-04-18 11:18:36.932117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.932157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:08.395 [2024-04-18 11:18:36.945140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fa3a0 00:31:08.395 [2024-04-18 11:18:36.946614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7693 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:08.395 [2024-04-18 11:18:36.946680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:08.396 [2024-04-18 11:18:36.958241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ec408 00:31:08.396 [2024-04-18 11:18:36.959862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.396 [2024-04-18 11:18:36.959898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:08.396 [2024-04-18 11:18:36.970915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ea680 00:31:08.396 [2024-04-18 11:18:36.972607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.396 [2024-04-18 11:18:36.972656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:08.396 [2024-04-18 11:18:36.982860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f35f0 00:31:08.396 [2024-04-18 11:18:36.984319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.396 [2024-04-18 11:18:36.984356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:08.396 [2024-04-18 11:18:36.997040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ff3c8 00:31:08.396 [2024-04-18 11:18:36.999028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.396 [2024-04-18 11:18:36.999087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:08.396 [2024-04-18 11:18:37.009790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ef6a8 00:31:08.396 [2024-04-18 11:18:37.011810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.396 [2024-04-18 11:18:37.011843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:08.396 [2024-04-18 11:18:37.021832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e0a68 00:31:08.396 [2024-04-18 11:18:37.023655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.396 [2024-04-18 11:18:37.023690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:08.396 [2024-04-18 11:18:37.033731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e38d0 00:31:08.396 [2024-04-18 11:18:37.035392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22219 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.396 [2024-04-18 11:18:37.035430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.045809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fd640 00:31:08.653 [2024-04-18 11:18:37.047154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.047215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.057935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fc128 00:31:08.653 [2024-04-18 11:18:37.059298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.059334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.069626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f31b8 00:31:08.653 [2024-04-18 11:18:37.070772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.070806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.084299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e01f8 00:31:08.653 [2024-04-18 11:18:37.086357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.086390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.093225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6cc8 00:31:08.653 [2024-04-18 11:18:37.094148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.094183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.108261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fe720 00:31:08.653 [2024-04-18 11:18:37.109898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.109958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.119858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ec408 00:31:08.653 [2024-04-18 11:18:37.121145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 
nsid:1 lba:17933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.121181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.132364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fbcf0 00:31:08.653 [2024-04-18 11:18:37.133642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.653 [2024-04-18 11:18:37.133678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:08.653 [2024-04-18 11:18:37.144145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f1868 00:31:08.654 [2024-04-18 11:18:37.145166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.145199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.155675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fd208 00:31:08.654 [2024-04-18 11:18:37.156543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.156608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.167877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ff3c8 00:31:08.654 [2024-04-18 11:18:37.168771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.168800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.180750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ed4e8 00:31:08.654 [2024-04-18 11:18:37.181695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.181729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.195290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6020 00:31:08.654 [2024-04-18 11:18:37.196788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.196824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.205763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0788 00:31:08.654 [2024-04-18 11:18:37.207550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:87 nsid:1 lba:8201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.207591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.219090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fa7d8 00:31:08.654 [2024-04-18 11:18:37.220533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.220568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.228596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f7da8 00:31:08.654 [2024-04-18 11:18:37.229369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.229404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.242873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e6b70 00:31:08.654 [2024-04-18 11:18:37.244495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.244532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.253912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0788 00:31:08.654 [2024-04-18 11:18:37.255269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.255307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.264826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e99d8 00:31:08.654 [2024-04-18 11:18:37.265810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.265847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.276283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ed4e8 00:31:08.654 [2024-04-18 11:18:37.277253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.277289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:08.654 [2024-04-18 11:18:37.290825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e2c28 00:31:08.654 [2024-04-18 11:18:37.292504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.654 [2024-04-18 11:18:37.292544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.302374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fb480 00:31:08.912 [2024-04-18 11:18:37.303613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.303652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.316441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fd208 00:31:08.912 [2024-04-18 11:18:37.318402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.318454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.324899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e88f8 00:31:08.912 [2024-04-18 11:18:37.325896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.325931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.339147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f31b8 00:31:08.912 [2024-04-18 11:18:37.340654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.340689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.350479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f31b8 00:31:08.912 [2024-04-18 11:18:37.351815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.351849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.361890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e49b0 00:31:08.912 [2024-04-18 11:18:37.363090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.363125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.373162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190de038 00:31:08.912 [2024-04-18 11:18:37.374158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.374195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.385086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fa3a0 00:31:08.912 [2024-04-18 11:18:37.385765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.912 [2024-04-18 11:18:37.385802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:08.912 [2024-04-18 11:18:37.396667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6458 00:31:08.912 [2024-04-18 11:18:37.397565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.397600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.410331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fb048 00:31:08.913 [2024-04-18 11:18:37.411877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.411915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.421056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f4298 00:31:08.913 [2024-04-18 11:18:37.422391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.422429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.432721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fd640 00:31:08.913 [2024-04-18 11:18:37.433906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.433942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.446256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0bc0 00:31:08.913 [2024-04-18 11:18:37.447812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.447848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.455571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f2948 00:31:08.913 [2024-04-18 11:18:37.456454] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.456489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.467733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fd208 00:31:08.913 [2024-04-18 11:18:37.468628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.468663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.479199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fc560 00:31:08.913 [2024-04-18 11:18:37.479947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.479982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.490904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e7c50 00:31:08.913 [2024-04-18 11:18:37.491670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.491701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.503014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f7da8 00:31:08.913 [2024-04-18 11:18:37.503764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.503799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.516682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e1710 00:31:08.913 [2024-04-18 11:18:37.517918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.517953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.528450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e4de8 00:31:08.913 [2024-04-18 11:18:37.529686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.529720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.542769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e5a90 00:31:08.913 [2024-04-18 
11:18:37.544721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.544762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:08.913 [2024-04-18 11:18:37.551488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e5658 00:31:08.913 [2024-04-18 11:18:37.552454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.913 [2024-04-18 11:18:37.552492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.566116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f9f68 00:31:09.172 [2024-04-18 11:18:37.567749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.567789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.578193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e6738 00:31:09.172 [2024-04-18 11:18:37.579820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.579857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.587759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6458 00:31:09.172 [2024-04-18 11:18:37.588720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.588755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.599385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e5658 00:31:09.172 [2024-04-18 11:18:37.600345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.600379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.612831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e4de8 00:31:09.172 [2024-04-18 11:18:37.614125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.614159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.624627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f1868 
00:31:09.172 [2024-04-18 11:18:37.626096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.626135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.635623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f7538 00:31:09.172 [2024-04-18 11:18:37.636614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.636650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.647950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f35f0 00:31:09.172 [2024-04-18 11:18:37.648921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.648957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.661812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ea680 00:31:09.172 [2024-04-18 11:18:37.663482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.663519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.672965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e3d08 00:31:09.172 [2024-04-18 11:18:37.674509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.674544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.684517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e95a0 00:31:09.172 [2024-04-18 11:18:37.685499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.685536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.695903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ec408 00:31:09.172 [2024-04-18 11:18:37.696776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.696812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.708284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with 
pdu=0x2000190eea00 00:31:09.172 [2024-04-18 11:18:37.709262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.709297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.720148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190efae0 00:31:09.172 [2024-04-18 11:18:37.721477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.721512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.731469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f4f40 00:31:09.172 [2024-04-18 11:18:37.732781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.732817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.743599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ebfd0 00:31:09.172 [2024-04-18 11:18:37.744909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.744945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.754987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f35f0 00:31:09.172 [2024-04-18 11:18:37.756163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.756199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.768935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fef90 00:31:09.172 [2024-04-18 11:18:37.770743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.770780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.781111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190dfdc0 00:31:09.172 [2024-04-18 11:18:37.782880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.782915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.792440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc7a7a0) with pdu=0x2000190de8a8 00:31:09.172 [2024-04-18 11:18:37.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.794115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:09.172 [2024-04-18 11:18:37.803724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ee190 00:31:09.172 [2024-04-18 11:18:37.805191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.172 [2024-04-18 11:18:37.805227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.814972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f7970 00:31:09.431 [2024-04-18 11:18:37.816323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.816363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.827213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f57b0 00:31:09.431 [2024-04-18 11:18:37.828187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.828227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.839127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ef270 00:31:09.431 [2024-04-18 11:18:37.840428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.840465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.850515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190eee38 00:31:09.431 [2024-04-18 11:18:37.851828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.851865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.862669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fb480 00:31:09.431 [2024-04-18 11:18:37.863983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.864018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.874228] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f6458 00:31:09.431 [2024-04-18 11:18:37.875398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.875434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.888244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190eff18 00:31:09.431 [2024-04-18 11:18:37.890045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.890092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.896814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fe2e8 00:31:09.431 [2024-04-18 11:18:37.897634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.897669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.909452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f1430 00:31:09.431 [2024-04-18 11:18:37.910413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.910447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.923787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fa7d8 00:31:09.431 [2024-04-18 11:18:37.925464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.925501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.934822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190df988 00:31:09.431 [2024-04-18 11:18:37.936084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.936122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.946103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fc128 00:31:09.431 [2024-04-18 11:18:37.947904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.947942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.959375] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e5a90 00:31:09.431 [2024-04-18 11:18:37.960853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.960888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.970683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e2c28 00:31:09.431 [2024-04-18 11:18:37.972017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.972060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.982052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190e9e10 00:31:09.431 [2024-04-18 11:18:37.983217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.983251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:37.993339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f1430 00:31:09.431 [2024-04-18 11:18:37.994333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:37.994367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:38.004736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190f0ff8 00:31:09.431 [2024-04-18 11:18:38.005608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:38.005644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:38.019198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190eee38 00:31:09.431 [2024-04-18 11:18:38.020767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:38.020809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 11:18:38.030655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190eee38 00:31:09.431 [2024-04-18 11:18:38.032005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.431 [2024-04-18 11:18:38.032047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:09.431 [2024-04-18 
11:18:38.041758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190fd640 00:31:09.432 [2024-04-18 11:18:38.042847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.432 [2024-04-18 11:18:38.042886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:09.432 [2024-04-18 11:18:38.054398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190df988 00:31:09.432 [2024-04-18 11:18:38.055371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.432 [2024-04-18 11:18:38.055414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:09.432 [2024-04-18 11:18:38.067750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190dfdc0 00:31:09.432 [2024-04-18 11:18:38.069276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.432 [2024-04-18 11:18:38.069320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.690 [2024-04-18 11:18:38.080315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7a7a0) with pdu=0x2000190ef6a8 00:31:09.690 [2024-04-18 11:18:38.081312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.690 [2024-04-18 11:18:38.081355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:09.690 00:31:09.690 Latency(us) 00:31:09.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.690 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.690 nvme0n1 : 2.00 20944.92 81.82 0.00 0.00 6101.78 2502.28 16324.42 00:31:09.690 =================================================================================================================== 00:31:09.690 Total : 20944.92 81.82 0.00 0.00 6101.78 2502.28 16324.42 00:31:09.690 0 00:31:09.690 11:18:38 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:09.690 11:18:38 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:09.690 11:18:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:09.690 11:18:38 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:09.690 | .driver_specific 00:31:09.690 | .nvme_error 00:31:09.690 | .status_code 00:31:09.690 | .command_transient_transport_error' 00:31:09.949 11:18:38 -- host/digest.sh@71 -- # (( 164 > 0 )) 00:31:09.949 11:18:38 -- host/digest.sh@73 -- # killprocess 105125 00:31:09.949 11:18:38 -- common/autotest_common.sh@936 -- # '[' -z 105125 ']' 00:31:09.949 11:18:38 -- common/autotest_common.sh@940 -- # kill -0 105125 00:31:09.949 11:18:38 -- common/autotest_common.sh@941 -- # uname 00:31:09.949 11:18:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:09.949 11:18:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105125 
00:31:09.949 killing process with pid 105125 00:31:09.949 Received shutdown signal, test time was about 2.000000 seconds 00:31:09.949 00:31:09.949 Latency(us) 00:31:09.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.949 =================================================================================================================== 00:31:09.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.949 11:18:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:09.949 11:18:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:09.949 11:18:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105125' 00:31:09.949 11:18:38 -- common/autotest_common.sh@955 -- # kill 105125 00:31:09.949 11:18:38 -- common/autotest_common.sh@960 -- # wait 105125 00:31:10.207 11:18:38 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:10.207 11:18:38 -- host/digest.sh@54 -- # local rw bs qd 00:31:10.207 11:18:38 -- host/digest.sh@56 -- # rw=randwrite 00:31:10.207 11:18:38 -- host/digest.sh@56 -- # bs=131072 00:31:10.207 11:18:38 -- host/digest.sh@56 -- # qd=16 00:31:10.207 11:18:38 -- host/digest.sh@58 -- # bperfpid=105203 00:31:10.207 11:18:38 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:10.207 11:18:38 -- host/digest.sh@60 -- # waitforlisten 105203 /var/tmp/bperf.sock 00:31:10.207 11:18:38 -- common/autotest_common.sh@817 -- # '[' -z 105203 ']' 00:31:10.207 11:18:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:10.207 11:18:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:10.207 11:18:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:10.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:10.207 11:18:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:10.207 11:18:38 -- common/autotest_common.sh@10 -- # set +x 00:31:10.207 [2024-04-18 11:18:38.708148] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:10.207 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:10.207 Zero copy mechanism will not be used. 
00:31:10.207 [2024-04-18 11:18:38.710127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105203 ] 00:31:10.464 [2024-04-18 11:18:38.851021] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.464 [2024-04-18 11:18:38.953271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.398 11:18:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:11.398 11:18:39 -- common/autotest_common.sh@850 -- # return 0 00:31:11.398 11:18:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:11.398 11:18:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:11.398 11:18:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:11.398 11:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.398 11:18:39 -- common/autotest_common.sh@10 -- # set +x 00:31:11.398 11:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.398 11:18:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:11.398 11:18:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:11.656 nvme0n1 00:31:11.915 11:18:40 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:11.915 11:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.915 11:18:40 -- common/autotest_common.sh@10 -- # set +x 00:31:11.915 11:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.915 11:18:40 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:11.915 11:18:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:11.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:11.915 Zero copy mechanism will not be used. 00:31:11.915 Running I/O for 2 seconds... 
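For readability, the xtraced host/digest.sh steps above reduce to the short command sequence below. This is only a condensed sketch of what this log already shows (the paths and the /var/tmp/bperf.sock socket are the ones from this run; note that the accel_error_inject_error calls are issued through rpc_cmd, i.e. the target application's default RPC socket, not the bperf socket):

  # Start bdevperf with the same flags as this run: 128 KiB random writes, queue depth 16, 2 seconds.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Enable per-controller error accounting, make sure crc32c injection starts disabled,
  # then attach the controller with TCP data digest (--ddgst) turned on.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # via rpc_cmd, default socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 32 crc32c calculations, then drive I/O; each corrupted digest shows up
  # below as a data_crc32_calc_done error plus a TRANSIENT TRANSPORT ERROR completion.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32    # via rpc_cmd, default socket
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # Afterwards the transient transport error count is read back from iostat, as in the
  # get_transient_errcount / jq pipeline traced earlier for pid 105125:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The run is judged the same way as the previous one: the counter read back must be greater than zero (compare the "(( 164 > 0 ))" check above) before the bdevperf process is killed.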
00:31:11.915 [2024-04-18 11:18:40.467290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.467622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.467663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.472831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.473151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.473193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.478204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.478507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.478552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.483656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.483959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.483992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.488979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.489326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.489383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.494420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.494723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.494758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.499833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.500146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.500178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.505238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.505539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.505569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.510538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.510837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.510869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.516195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.516498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.516538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.521604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.521903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.521935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.526930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.527253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.527285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.532381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.532682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.532717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.537748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.538061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.538095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.543117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.543466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.548496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.548796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.548828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:11.915 [2024-04-18 11:18:40.553918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:11.915 [2024-04-18 11:18:40.554222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.915 [2024-04-18 11:18:40.554261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.559327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.559620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.559659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.564789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.565100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.565135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.570196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.570496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.570531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.575804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.576116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.576146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.581209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.581509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.581544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.586557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.586863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.586897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.591964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.592262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.592304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.597289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.597590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.597626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.602645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.174 [2024-04-18 11:18:40.602945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.174 [2024-04-18 11:18:40.602977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.174 [2024-04-18 11:18:40.607962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.608277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.608315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.613339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.613639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 
11:18:40.613670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.618694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.619061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.619101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.624257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.624558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.624593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.629703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.630005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.630058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.635171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.635479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.635514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.640613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.640914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.640946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.646006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.646321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.646357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.651406] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.651705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.651741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.656765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.657077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.657113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.662198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.662497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.662532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.667612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.667921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.667954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.673005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.673319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.673354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.678457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.678756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.678793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.683895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.684229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.684274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.689761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.690048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.690092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.695580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.695896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.695928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.701079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.701390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.701422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.706438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.706737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.706773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.711895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.712205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.712235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.717487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.717786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.717820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.723092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.723433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.723469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.728539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.728847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.728883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.733920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.734245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.734283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.739356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.739646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.739679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.744712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.745018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.745073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.750131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.750443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.175 [2024-04-18 11:18:40.750483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.175 [2024-04-18 11:18:40.755506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.175 [2024-04-18 11:18:40.755790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.755828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.760930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.761252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.761290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.766336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.766636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.766672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.771724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.772009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.772056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.777116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.777417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.777453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.782485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.782783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.782814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.787888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.788201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.788232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.793323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.793625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.793657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.798716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.799000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.799043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.804184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 
[2024-04-18 11:18:40.804484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.804517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.176 [2024-04-18 11:18:40.809492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.176 [2024-04-18 11:18:40.809777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.176 [2024-04-18 11:18:40.809808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.435 [2024-04-18 11:18:40.814846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.435 [2024-04-18 11:18:40.815162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.435 [2024-04-18 11:18:40.815209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.435 [2024-04-18 11:18:40.820269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.435 [2024-04-18 11:18:40.820571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.435 [2024-04-18 11:18:40.820605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.435 [2024-04-18 11:18:40.825731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.435 [2024-04-18 11:18:40.826032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.435 [2024-04-18 11:18:40.826081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.435 [2024-04-18 11:18:40.831117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.435 [2024-04-18 11:18:40.831459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.435 [2024-04-18 11:18:40.831512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.435 [2024-04-18 11:18:40.836916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.435 [2024-04-18 11:18:40.837213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.837247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.842085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) 
with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.842421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.842477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.847014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.847344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.847403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.851735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.851945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.851980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.856435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.856695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.856742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.861020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.861270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.861317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.865681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.865897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.865933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.870365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.870602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.870651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.875063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.875307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.875351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.879699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.879919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.879963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.884452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.884679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.884728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.889163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.889486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.889545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.893795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.894010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.894080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.898475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.898676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.898723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.903125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.903335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.903381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.907771] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.907939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.907964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.912582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.912765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.912794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.917370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.917533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.917554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.922065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.922229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.922251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.926804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.926990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.927011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.931608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.931813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.931834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.936390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.936565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.936587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:31:12.436 [2024-04-18 11:18:40.941208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.941374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.941395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.945937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.946114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.946137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.950745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.950932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.950954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.955519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.955683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.955704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.960269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.960450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.436 [2024-04-18 11:18:40.960472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.436 [2024-04-18 11:18:40.965075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.436 [2024-04-18 11:18:40.965264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.965285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:40.969870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:40.970048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.970070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:40.974727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:40.974895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.974917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:40.979456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:40.979629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.979651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:40.984268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:40.984433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.984454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:40.988968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:40.989168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.989191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:40.993768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:40.993932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.993953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:40.998530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:40.998694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:40.998715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.003302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.003469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.003491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.008147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.008319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.008340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.012916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.013097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.013119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.017776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.017943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.017974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.022542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.022728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.022758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.027329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.027505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.027526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.032115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.032298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.032319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.036922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.037119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.041696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.041859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.041880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.046468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.046653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.046674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.051278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.051470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.051492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.056122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.056294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.056315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.060873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.061062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.061084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.065627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.065813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 [2024-04-18 11:18:41.065835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.437 [2024-04-18 11:18:41.070358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.437 [2024-04-18 11:18:41.070537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.437 
[2024-04-18 11:18:41.070558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.075131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.075305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.075327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.079863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.080029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.080063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.084660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.084836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.084857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.089448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.089615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.089636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.094205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.094370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.094391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.099010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.099227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.099249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.103875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.104054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.104076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.108704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.108870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.108904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.113536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.113700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.113722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.118302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.118480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.118502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.123066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.123244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.123266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.127861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.128055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.128077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.132680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.132881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.132902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.137495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.137681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.137702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.142429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.142593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.142614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.147276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.147456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.147478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.152151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.152338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.152359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.157016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.157203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.157224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.161732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.161895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.161916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.166565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.166742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.166763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.171398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.171565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.171587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.176195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.697 [2024-04-18 11:18:41.176361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.697 [2024-04-18 11:18:41.176383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.697 [2024-04-18 11:18:41.181079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.181275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.181302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.185810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.185974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.185996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.190565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.190752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.190774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.195482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.195646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.195668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.200434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.200611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.200633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.205302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 
[2024-04-18 11:18:41.205467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.205488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.210066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.210247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.210268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.214783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.214962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.214983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.219586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.219751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.219772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.224311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.224480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.224502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.229115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.229279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.229300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.233868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.234045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.234067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.238563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) 
with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.238738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.238760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.243340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.243535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.243556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.248163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.248329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.248350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.253160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.253370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.253392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.258012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.258188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.258210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.262820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.262986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.263007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.267647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.267833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.267854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.272422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.272586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.272607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.277239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.277404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.277425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.281996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.282187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.282209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.286916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.287110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.287133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.291701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.291875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.698 [2024-04-18 11:18:41.291897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.698 [2024-04-18 11:18:41.296480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.698 [2024-04-18 11:18:41.296643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.296664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.301242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.301407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.301439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.306053] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.306216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.306242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.310760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.310953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.310975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.315623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.315787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.315808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.320408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.320574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.320597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.325197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.325362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.325391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.330003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.330193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.330226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.699 [2024-04-18 11:18:41.334818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.699 [2024-04-18 11:18:41.334984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.699 [2024-04-18 11:18:41.335015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:31:12.958 [2024-04-18 11:18:41.339664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.958 [2024-04-18 11:18:41.339839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.958 [2024-04-18 11:18:41.339869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.958 [2024-04-18 11:18:41.344402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.958 [2024-04-18 11:18:41.344568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.958 [2024-04-18 11:18:41.344604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.958 [2024-04-18 11:18:41.349177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.958 [2024-04-18 11:18:41.349367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.958 [2024-04-18 11:18:41.349398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.958 [2024-04-18 11:18:41.353973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.958 [2024-04-18 11:18:41.354155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.958 [2024-04-18 11:18:41.354187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.958 [2024-04-18 11:18:41.358862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.359043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.359078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.363717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.363895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.363926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.368574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.368740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.368771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.373363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.373533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.378167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.378332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.378353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.382968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.383145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.383167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.387805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.387969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.387991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.392615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.392777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.392807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.397437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.397622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.397645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.402258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.402421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.402441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.407089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.407290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.407311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.412062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.412225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.412247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.416840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.417015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.417037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.421700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.421876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.421897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.426490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.426652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.426673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.431492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.431667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.431688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.436361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.436534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.436555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.441177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.441374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.445900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.446080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.446101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.450637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.450811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.450832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.455426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.455607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.455628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.460211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.460392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.460413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.465046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.465210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.465231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.469791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.469959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 
11:18:41.469979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.474667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.474854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.474875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.479552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.479718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.479740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.484314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.484477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.484498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.489053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.489274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.489310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.494001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.494180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.494202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.498729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.498906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.498928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.503509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.503685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.503707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.508359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.508542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.508563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.513117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.513307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.513328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.517955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.518131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.518153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.522723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.522887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.522908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.527550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.527727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.527748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.532375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.532538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.532559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.537130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.537294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.537326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.541996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.542175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.542206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.546808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.546972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.547004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.551645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.551808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.551850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.556416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.556582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.556608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.561251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.561426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.561459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.566021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.566221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.566253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.570859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.571058] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.571091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.575670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.575841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.575872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.580485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.580664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.580696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.585302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.585485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.585517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.590091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.590268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.590290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:12.959 [2024-04-18 11:18:41.594875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:12.959 [2024-04-18 11:18:41.595069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.959 [2024-04-18 11:18:41.595093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.599642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.599820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.599842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.604457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.604634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.604665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.609322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.609494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.609515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.614137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.614302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.614331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.618862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.619026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.619061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.623694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.623868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.623890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.628497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.628686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.628717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.633396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.633574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.633595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.638162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 
[2024-04-18 11:18:41.638327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.638348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.642935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.643111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.643133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.647788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.647964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.647986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.652585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.652761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.652782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.657475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.657689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.657713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.662306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.662472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.662493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.667141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.667325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.219 [2024-04-18 11:18:41.667346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.219 [2024-04-18 11:18:41.671921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) 
with pdu=0x2000190fef90 00:31:13.219 [2024-04-18 11:18:41.672097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.672120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.676831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.677009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.677045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.681603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.681767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.681789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.686463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.686628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.686649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.691321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.691503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.691524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.696214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.696382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.696404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.701200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.701366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.701388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.706096] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.706263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.706284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.710902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.711080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.711103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.715779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.715961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.715983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.720600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.720778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.720800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.725423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.725617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.725640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.730256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.730446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.730467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.735102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.735313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.735335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.739932] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.740108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.740130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.744647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.744823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.744844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.749572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.749762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.749784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.754441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.754617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.754638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.759213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.759394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.759416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.764272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.764436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.764457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.769280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.769489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.769511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:31:13.220 [2024-04-18 11:18:41.774427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.774590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.774611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.779548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.779744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.779766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.784463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.784682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.784703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.789614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.789778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.789799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.794380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.794559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.794579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.799309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.799474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.799494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.220 [2024-04-18 11:18:41.804185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.220 [2024-04-18 11:18:41.804372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.220 [2024-04-18 11:18:41.804393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.809255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.809419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.809441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.814146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.814309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.814330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.818934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.819126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.819147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.824307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.824531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.824551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.829531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.829706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.829728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.834810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.834974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.834996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.839990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.840208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.840230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.845335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.845542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.845563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.850670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.850866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.850912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.221 [2024-04-18 11:18:41.855810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.221 [2024-04-18 11:18:41.855975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.221 [2024-04-18 11:18:41.855996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.861161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.861416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.861463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.866270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.866486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.866507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.871604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.871829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.871851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.876776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.877004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.877044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.882058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.882260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.882284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.886874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.887135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.887166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.892172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.892385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.892406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.897392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.897595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.897618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.902729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.902935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.902957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.907933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.908123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.908145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.912851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.913046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 
[2024-04-18 11:18:41.913068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.917946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.918142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.918163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.923052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.923311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.923333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.928334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.928501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.928522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.933347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.933561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.933581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.938341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.938563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.938584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.943453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.943639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.943660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.948548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.948755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.948775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.953584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.953790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.953815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.958776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.958978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.481 [2024-04-18 11:18:41.958999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.481 [2024-04-18 11:18:41.963764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.481 [2024-04-18 11:18:41.963949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.963970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:41.968526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:41.968691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.968714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:41.973272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:41.973448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.973470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:41.978096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:41.978284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.978306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:41.983041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:41.983258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.983280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:41.987934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:41.988110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.988132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:41.992726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:41.992905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.992928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:41.997840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:41.998006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:41.998027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.002844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.003017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.003059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.008126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.008365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.008386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.013254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.013419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.013440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.018075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.018256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.018276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.022903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.023083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.023104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.027920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.028122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.028144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.032958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.033159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.033182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.038212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.038388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.038408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.043123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.043298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.043320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.048048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.048240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.048261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.053132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 
[2024-04-18 11:18:42.053326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.053369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.058372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.058564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.058595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.063348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.063514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.063544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.068413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.068612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.068634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.073536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.073711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.073733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.078621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.078807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.078830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.083781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.083944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.083968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.088671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with 
pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.088854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.088877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.093864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.094028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.482 [2024-04-18 11:18:42.094050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.482 [2024-04-18 11:18:42.099173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.482 [2024-04-18 11:18:42.099377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.483 [2024-04-18 11:18:42.099398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.483 [2024-04-18 11:18:42.104302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.483 [2024-04-18 11:18:42.104496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.483 [2024-04-18 11:18:42.104518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.483 [2024-04-18 11:18:42.109297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.483 [2024-04-18 11:18:42.109488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.483 [2024-04-18 11:18:42.109511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.483 [2024-04-18 11:18:42.114580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.483 [2024-04-18 11:18:42.114803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.483 [2024-04-18 11:18:42.114824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.483 [2024-04-18 11:18:42.119877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.483 [2024-04-18 11:18:42.120042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.483 [2024-04-18 11:18:42.120064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.743 [2024-04-18 11:18:42.124844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.743 [2024-04-18 11:18:42.125018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.743 [2024-04-18 11:18:42.125059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.743 [2024-04-18 11:18:42.129795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.743 [2024-04-18 11:18:42.129959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.743 [2024-04-18 11:18:42.129980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.134781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.134970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.134992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.139855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.140031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.140053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.144994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.145207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.145253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.150073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.150286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.150307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.155116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.155359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.155386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.160096] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.160279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.160300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.164898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.165096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.165118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.169853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.170017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.170038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.174675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.174839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.174860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.179859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.180035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.180057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.185245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.185440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.185461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.190254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.190458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.190479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
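(Annotation, not part of the captured log.) Every record above follows the same pattern: the TCP transport code (data_crc32_calc_done in tcp.c) reports a data digest mismatch on a received data PDU, and the matching WRITE on qid:1 cid:15 then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what this digest error-injection pass is meant to provoke. For orientation only: NVMe/TCP's optional data digest (DDGST) is a CRC-32C over the PDU payload. The sketch below is a minimal, standalone C illustration of that kind of check, using a generic bitwise CRC-32C and a hypothetical received_digest value; it is not the SPDK implementation referenced in the log.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Reflected CRC-32C (Castagnoli): polynomial 0x1EDC6F41, reflected form 0x82F63B78,
     * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. Bitwise variant, kept short for clarity. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Stand-in for a PDU payload; CRC-32C("123456789") is the well-known check value 0xE3069283. */
        const char *payload = "123456789";
        uint32_t received_digest = 0xE3069283u;  /* hypothetical DDGST carried with the PDU */
        uint32_t computed = crc32c((const uint8_t *)payload, strlen(payload));

        if (computed != received_digest) {
            /* In the log above this is where the transport reports "Data digest error". */
            printf("data digest error: computed=0x%08X expected=0x%08X\n",
                   computed, received_digest);
        } else {
            printf("data digest OK: 0x%08X\n", computed);
        }
        return 0;
    }

Built with any C99 compiler this prints one line per check, mirroring the one-error-per-injected-WRITE cadence visible in the records above and below.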
00:31:13.744 [2024-04-18 11:18:42.195203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.195406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.195427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.200389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.200572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.200594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.205350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.205547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.205568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.210484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.210689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.210710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.215482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.215658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.215679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.220263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.220440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.220464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.225022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.225198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.225220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.229754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.229921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.229942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.234514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.234683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.234705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.239342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.239530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.239560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.244210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.244392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.244413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.248964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.249142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.249164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.253741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.253929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.253950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.258694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.258873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.258894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.263703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.744 [2024-04-18 11:18:42.263879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.744 [2024-04-18 11:18:42.263900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.744 [2024-04-18 11:18:42.268659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.268842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.268863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.273449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.273635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.273656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.278161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.278329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.278350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.283130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.283319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.283350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.288294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.288475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.288496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.293366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.293531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.293553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.298582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.298764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.298786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.303924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.304141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.304174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.309214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.309423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.309459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.314282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.314474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.314494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.319451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.319656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.319688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.324519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.324706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.324738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.329311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.329485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 
11:18:42.329508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.334079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.334255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.334289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.338920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.339107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.339140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.344010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.344235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.344273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.349151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.349347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.349377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.354347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.354513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.354545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.359390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.359555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.359589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.364492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.364727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.364771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.369590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.369779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.369811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.374797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.374969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.375021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:13.745 [2024-04-18 11:18:42.380352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:13.745 [2024-04-18 11:18:42.380518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.745 [2024-04-18 11:18:42.380539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:14.004 [2024-04-18 11:18:42.385695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.004 [2024-04-18 11:18:42.385859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.004 [2024-04-18 11:18:42.385881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:14.004 [2024-04-18 11:18:42.390861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.004 [2024-04-18 11:18:42.391082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.004 [2024-04-18 11:18:42.391103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:14.004 [2024-04-18 11:18:42.396071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.004 [2024-04-18 11:18:42.396295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.004 [2024-04-18 11:18:42.396316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:14.004 [2024-04-18 11:18:42.401243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.004 [2024-04-18 11:18:42.401447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.004 [2024-04-18 11:18:42.401468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:14.004 [2024-04-18 11:18:42.406299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.004 [2024-04-18 11:18:42.406507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.004 [2024-04-18 11:18:42.406529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:14.004 [2024-04-18 11:18:42.411208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.004 [2024-04-18 11:18:42.411397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.004 [2024-04-18 11:18:42.411420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:14.004 [2024-04-18 11:18:42.415967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.004 [2024-04-18 11:18:42.416170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.416206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.420794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.420959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.420990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.425548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.425712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.425742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.430361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.430544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.430574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.435118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.435319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.435349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.439912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.440097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.440127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.444690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.444864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.444896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.449465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.449642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.449671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.454297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.454467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.454497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:14.005 [2024-04-18 11:18:42.459014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc7aae0) with pdu=0x2000190fef90 00:31:14.005 [2024-04-18 11:18:42.459201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.005 [2024-04-18 11:18:42.459238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:14.005 00:31:14.005 Latency(us) 00:31:14.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.005 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:14.005 nvme0n1 : 2.00 6206.38 775.80 0.00 0.00 2572.40 1578.82 5928.03 00:31:14.005 =================================================================================================================== 00:31:14.005 Total : 6206.38 775.80 0.00 0.00 2572.40 1578.82 5928.03 00:31:14.005 0 00:31:14.005 11:18:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:14.005 11:18:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:14.005 11:18:42 -- host/digest.sh@28 -- 
# jq -r '.bdevs[0] 00:31:14.005 | .driver_specific 00:31:14.005 | .nvme_error 00:31:14.005 | .status_code 00:31:14.005 | .command_transient_transport_error' 00:31:14.005 11:18:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:14.264 11:18:42 -- host/digest.sh@71 -- # (( 400 > 0 )) 00:31:14.264 11:18:42 -- host/digest.sh@73 -- # killprocess 105203 00:31:14.264 11:18:42 -- common/autotest_common.sh@936 -- # '[' -z 105203 ']' 00:31:14.264 11:18:42 -- common/autotest_common.sh@940 -- # kill -0 105203 00:31:14.264 11:18:42 -- common/autotest_common.sh@941 -- # uname 00:31:14.264 11:18:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:14.264 11:18:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105203 00:31:14.264 11:18:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:14.264 11:18:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:14.264 11:18:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105203' 00:31:14.264 killing process with pid 105203 00:31:14.264 11:18:42 -- common/autotest_common.sh@955 -- # kill 105203 00:31:14.264 Received shutdown signal, test time was about 2.000000 seconds 00:31:14.264 00:31:14.264 Latency(us) 00:31:14.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.264 =================================================================================================================== 00:31:14.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:14.264 11:18:42 -- common/autotest_common.sh@960 -- # wait 105203 00:31:14.523 11:18:43 -- host/digest.sh@116 -- # killprocess 104918 00:31:14.523 11:18:43 -- common/autotest_common.sh@936 -- # '[' -z 104918 ']' 00:31:14.523 11:18:43 -- common/autotest_common.sh@940 -- # kill -0 104918 00:31:14.523 11:18:43 -- common/autotest_common.sh@941 -- # uname 00:31:14.523 11:18:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:14.523 11:18:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104918 00:31:14.523 11:18:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:14.523 killing process with pid 104918 00:31:14.523 11:18:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:14.523 11:18:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104918' 00:31:14.523 11:18:43 -- common/autotest_common.sh@955 -- # kill 104918 00:31:14.523 11:18:43 -- common/autotest_common.sh@960 -- # wait 104918 00:31:14.781 00:31:14.781 real 0m17.332s 00:31:14.781 user 0m32.712s 00:31:14.781 sys 0m4.585s 00:31:14.781 11:18:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:14.781 11:18:43 -- common/autotest_common.sh@10 -- # set +x 00:31:14.781 ************************************ 00:31:14.781 END TEST nvmf_digest_error 00:31:14.781 ************************************ 00:31:14.781 11:18:43 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:14.781 11:18:43 -- host/digest.sh@150 -- # nvmftestfini 00:31:14.781 11:18:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:14.781 11:18:43 -- nvmf/common.sh@117 -- # sync 00:31:14.781 11:18:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.781 11:18:43 -- nvmf/common.sh@120 -- # set +e 00:31:14.781 11:18:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.781 11:18:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.781 rmmod nvme_tcp 00:31:14.781 rmmod nvme_fabrics 00:31:14.781 rmmod 
nvme_keyring 00:31:15.039 11:18:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:15.039 11:18:43 -- nvmf/common.sh@124 -- # set -e 00:31:15.039 11:18:43 -- nvmf/common.sh@125 -- # return 0 00:31:15.039 11:18:43 -- nvmf/common.sh@478 -- # '[' -n 104918 ']' 00:31:15.039 11:18:43 -- nvmf/common.sh@479 -- # killprocess 104918 00:31:15.039 11:18:43 -- common/autotest_common.sh@936 -- # '[' -z 104918 ']' 00:31:15.039 11:18:43 -- common/autotest_common.sh@940 -- # kill -0 104918 00:31:15.039 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (104918) - No such process 00:31:15.039 Process with pid 104918 is not found 00:31:15.039 11:18:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 104918 is not found' 00:31:15.039 11:18:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:15.039 11:18:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:15.039 11:18:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:15.039 11:18:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:15.039 11:18:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:15.039 11:18:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.039 11:18:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.039 11:18:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.039 11:18:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:15.039 00:31:15.039 real 0m37.074s 00:31:15.039 user 1m9.165s 00:31:15.039 sys 0m9.549s 00:31:15.039 11:18:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:15.039 11:18:43 -- common/autotest_common.sh@10 -- # set +x 00:31:15.039 ************************************ 00:31:15.039 END TEST nvmf_digest 00:31:15.039 ************************************ 00:31:15.039 11:18:43 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:31:15.039 11:18:43 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:31:15.039 11:18:43 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:31:15.039 11:18:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:15.039 11:18:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:15.039 11:18:43 -- common/autotest_common.sh@10 -- # set +x 00:31:15.039 ************************************ 00:31:15.039 START TEST nvmf_mdns_discovery 00:31:15.039 ************************************ 00:31:15.039 11:18:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:31:15.297 * Looking for test storage... 
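Before the mDNS discovery suite gets going, note how the digest-error run above was scored: get_transient_errcount pipes bdev_get_iostat from bperf's RPC socket through jq, and the test passes because the resulting counter (400 here) is greater than zero. Reassembled onto one line, the traced command is:

# Reassembled from the xtrace output above (socket path and bdev name are the
# ones used by the bperf run that just finished).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error'

The (( 400 > 0 )) check in the trace is this value being compared against zero.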
00:31:15.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:15.297 11:18:43 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:15.297 11:18:43 -- nvmf/common.sh@7 -- # uname -s 00:31:15.297 11:18:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.297 11:18:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.297 11:18:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.297 11:18:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.297 11:18:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.297 11:18:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.297 11:18:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.297 11:18:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.297 11:18:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.297 11:18:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.297 11:18:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:31:15.297 11:18:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:31:15.297 11:18:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.297 11:18:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.297 11:18:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:15.297 11:18:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.297 11:18:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:15.297 11:18:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.297 11:18:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.297 11:18:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.297 11:18:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.298 11:18:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.298 11:18:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.298 11:18:43 -- paths/export.sh@5 -- # export PATH 00:31:15.298 11:18:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.298 11:18:43 -- nvmf/common.sh@47 -- # : 0 00:31:15.298 11:18:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:15.298 11:18:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:15.298 11:18:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.298 11:18:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.298 11:18:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.298 11:18:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:15.298 11:18:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:15.298 11:18:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:31:15.298 11:18:43 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:31:15.298 11:18:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:15.298 11:18:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.298 11:18:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:15.298 11:18:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:15.298 11:18:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:15.298 11:18:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.298 11:18:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.298 11:18:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.298 11:18:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:15.298 11:18:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:15.298 11:18:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:15.298 11:18:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:15.298 11:18:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:15.298 11:18:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:15.298 11:18:43 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:31:15.298 11:18:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.298 11:18:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:15.298 11:18:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:15.298 11:18:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:15.298 11:18:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:15.298 11:18:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:15.298 11:18:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.298 11:18:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:15.298 11:18:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:15.298 11:18:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:15.298 11:18:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:15.298 11:18:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:15.298 11:18:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:15.298 Cannot find device "nvmf_tgt_br" 00:31:15.298 11:18:43 -- nvmf/common.sh@155 -- # true 00:31:15.298 11:18:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:15.298 Cannot find device "nvmf_tgt_br2" 00:31:15.298 11:18:43 -- nvmf/common.sh@156 -- # true 00:31:15.298 11:18:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:15.298 11:18:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:15.298 Cannot find device "nvmf_tgt_br" 00:31:15.298 11:18:43 -- nvmf/common.sh@158 -- # true 00:31:15.298 11:18:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:15.298 Cannot find device "nvmf_tgt_br2" 00:31:15.298 11:18:43 -- nvmf/common.sh@159 -- # true 00:31:15.298 11:18:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:15.298 11:18:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:15.298 11:18:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:15.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:15.298 11:18:43 -- nvmf/common.sh@162 -- # true 00:31:15.298 11:18:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:15.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:15.298 11:18:43 -- nvmf/common.sh@163 -- # true 00:31:15.298 11:18:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:15.298 11:18:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:15.298 11:18:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:15.298 11:18:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:15.298 11:18:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:15.298 11:18:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:15.298 11:18:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:15.298 11:18:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:15.555 11:18:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:15.555 11:18:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:15.555 11:18:43 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:31:15.555 11:18:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:15.555 11:18:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:15.555 11:18:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:15.555 11:18:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:15.555 11:18:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:15.555 11:18:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:15.555 11:18:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:15.555 11:18:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:15.555 11:18:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:15.556 11:18:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:15.556 11:18:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:15.556 11:18:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:15.556 11:18:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:15.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:31:15.556 00:31:15.556 --- 10.0.0.2 ping statistics --- 00:31:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.556 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:31:15.556 11:18:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:15.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:15.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:31:15.556 00:31:15.556 --- 10.0.0.3 ping statistics --- 00:31:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.556 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:31:15.556 11:18:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:15.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:31:15.556 00:31:15.556 --- 10.0.0.1 ping statistics --- 00:31:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.556 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:31:15.556 11:18:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.556 11:18:44 -- nvmf/common.sh@422 -- # return 0 00:31:15.556 11:18:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:15.556 11:18:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.556 11:18:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:15.556 11:18:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:15.556 11:18:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.556 11:18:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:15.556 11:18:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:15.556 11:18:44 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:31:15.556 11:18:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:15.556 11:18:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:15.556 11:18:44 -- common/autotest_common.sh@10 -- # set +x 00:31:15.556 11:18:44 -- nvmf/common.sh@470 -- # nvmfpid=105502 00:31:15.556 11:18:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:31:15.556 11:18:44 -- nvmf/common.sh@471 -- # waitforlisten 105502 00:31:15.556 11:18:44 -- common/autotest_common.sh@817 -- # '[' -z 105502 ']' 00:31:15.556 11:18:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.556 11:18:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:15.556 11:18:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.556 11:18:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:15.556 11:18:44 -- common/autotest_common.sh@10 -- # set +x 00:31:15.556 [2024-04-18 11:18:44.128275] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:15.556 [2024-04-18 11:18:44.128595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.813 [2024-04-18 11:18:44.271988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.813 [2024-04-18 11:18:44.370387] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.813 [2024-04-18 11:18:44.370609] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.813 [2024-04-18 11:18:44.370644] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.813 [2024-04-18 11:18:44.370655] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.813 [2024-04-18 11:18:44.370664] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
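The target now starting (nvmfpid 105502, core mask 0x2, --wait-for-rpc) runs inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init assembled just above. Condensed from that trace (the individual `ip link set ... up` steps are omitted here), the test network is:

# Target namespace with two veth endpoints (10.0.0.2, 10.0.0.3); the initiator
# side (10.0.0.1) stays in the root namespace, bridged over nvmf_br.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings traced above (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) verify this wiring before the target application is launched.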
00:31:15.813 [2024-04-18 11:18:44.370699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.783 11:18:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:16.783 11:18:45 -- common/autotest_common.sh@850 -- # return 0 00:31:16.783 11:18:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:16.783 11:18:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 11:18:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.783 11:18:45 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:31:16.783 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.783 11:18:45 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:31:16.783 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.783 11:18:45 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.783 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 [2024-04-18 11:18:45.293481] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.783 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.783 11:18:45 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:16.783 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 [2024-04-18 11:18:45.301632] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:16.783 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.783 11:18:45 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:16.783 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 null0 00:31:16.783 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.783 11:18:45 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:16.783 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 null1 00:31:16.783 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.783 11:18:45 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:31:16.783 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.783 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 null2 00:31:16.783 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.784 11:18:45 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:31:16.784 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.784 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.784 null3 00:31:16.784 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.784 11:18:45 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
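With the target up, mdns_discovery.sh configures it over the default RPC socket: discovery log entries will be filtered by address, a TCP transport is created, the well-known discovery subsystem listens on 10.0.0.2:8009, and four null bdevs are created as backing storage for the subsystems added later. Replayed from the rpc_cmd trace above (rpc_cmd is a thin wrapper around scripts/rpc.py):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_set_config --discovery-filter=address   # filter discovery log pages by address
$rpc framework_start_init                         # leave the --wait-for-rpc holding state
$rpc nvmf_create_transport -t tcp -o -u 8192      # transport options exactly as traced
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
     -t tcp -a 10.0.0.2 -s 8009                   # discovery service to be advertised over mDNS
for i in 0 1 2 3; do
    $rpc bdev_null_create "null$i" 1000 512       # 1000 MiB null bdev, 512-byte blocks
done
$rpc bdev_wait_for_examine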
00:31:16.784 11:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.784 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.784 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:16.784 11:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.784 11:18:45 -- host/mdns_discovery.sh@47 -- # hostpid=105552 00:31:16.784 11:18:45 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:16.784 11:18:45 -- host/mdns_discovery.sh@48 -- # waitforlisten 105552 /tmp/host.sock 00:31:16.784 11:18:45 -- common/autotest_common.sh@817 -- # '[' -z 105552 ']' 00:31:16.784 11:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:31:16.784 11:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:16.784 11:18:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:16.784 11:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:16.784 11:18:45 -- common/autotest_common.sh@10 -- # set +x 00:31:16.784 [2024-04-18 11:18:45.403679] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:16.784 [2024-04-18 11:18:45.403964] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105552 ] 00:31:17.042 [2024-04-18 11:18:45.545905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.042 [2024-04-18 11:18:45.642465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.976 11:18:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:17.976 11:18:46 -- common/autotest_common.sh@850 -- # return 0 00:31:17.976 11:18:46 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:31:17.976 11:18:46 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:31:17.976 11:18:46 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:31:17.976 11:18:46 -- host/mdns_discovery.sh@57 -- # avahipid=105586 00:31:17.976 11:18:46 -- host/mdns_discovery.sh@58 -- # sleep 1 00:31:17.976 11:18:46 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:31:17.976 11:18:46 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:31:17.976 Process 1004 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:31:17.976 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:31:17.976 Successfully dropped root privileges. 00:31:17.976 avahi-daemon 0.8 starting up. 00:31:17.976 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:31:18.908 Successfully called chroot(). 00:31:18.908 Successfully dropped remaining capabilities. 00:31:18.908 No service file found in /etc/avahi/services. 00:31:18.908 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:31:18.908 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:31:18.908 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:31:18.908 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:31:18.909 Network interface enumeration completed. 
00:31:18.909 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:31:18.909 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:31:18.909 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:31:18.909 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:31:18.909 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3224714568. 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:18.909 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.909 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:18.909 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:18.909 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.909 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:18.909 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:18.909 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:31:18.909 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@68 -- # sort 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@68 -- # xargs 00:31:18.909 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@64 -- # sort 00:31:18.909 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.909 11:18:47 -- host/mdns_discovery.sh@64 -- # xargs 00:31:18.909 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.166 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:19.166 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.166 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.166 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:31:19.166 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # sort 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # xargs 00:31:19.166 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.166 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:31:19.166 
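The checks above drive the host application over its RPC socket via rpc_cmd. Assuming rpc_cmd is the usual SPDK autotest wrapper around scripts/rpc.py (an assumption, not shown in this log), the discovery bringup and the empty-state assertions could be reproduced by hand roughly as in the sketch below; this is illustrative, not captured output:

  # start mDNS-driven discovery on the host app listening on /tmp/host.sock
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # at this point no controllers or bdevs exist yet, hence the [[ '' == '' ]] comparisons
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs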
11:18:47 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # xargs 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # sort 00:31:19.166 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.166 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.166 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:19.166 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.166 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.166 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:19.166 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.166 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # sort 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@68 -- # xargs 00:31:19.166 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:31:19.166 [2024-04-18 11:18:47.783266] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:19.166 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.166 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # sort 00:31:19.166 11:18:47 -- host/mdns_discovery.sh@64 -- # xargs 00:31:19.166 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.455 11:18:47 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:31:19.455 11:18:47 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:19.455 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.455 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.455 [2024-04-18 11:18:47.838259] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.455 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.455 11:18:47 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:19.455 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.455 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.455 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.455 11:18:47 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:31:19.455 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.455 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.455 11:18:47 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.455 11:18:47 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:31:19.455 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.455 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.455 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.455 11:18:47 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:31:19.456 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.456 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.456 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.456 11:18:47 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:31:19.456 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.456 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.456 [2024-04-18 11:18:47.878229] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:31:19.456 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.456 11:18:47 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:31:19.456 11:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.456 11:18:47 -- common/autotest_common.sh@10 -- # set +x 00:31:19.456 [2024-04-18 11:18:47.886178] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:19.456 11:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.456 11:18:47 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=105632 00:31:19.456 11:18:47 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:31:19.456 11:18:47 -- host/mdns_discovery.sh@125 -- # sleep 5 00:31:20.045 [2024-04-18 11:18:48.683270] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:31:20.303 Established under name 'CDC' 00:31:20.561 [2024-04-18 11:18:49.083276] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:31:20.561 [2024-04-18 11:18:49.083327] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:31:20.561 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:31:20.561 cookie is 0 00:31:20.561 is_local: 1 00:31:20.561 our_own: 0 00:31:20.561 wide_area: 0 00:31:20.561 multicast: 1 00:31:20.561 cached: 1 00:31:20.561 [2024-04-18 11:18:49.183271] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:31:20.561 [2024-04-18 11:18:49.183311] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:31:20.561 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:31:20.561 cookie is 0 00:31:20.561 is_local: 1 00:31:20.561 our_own: 0 00:31:20.561 wide_area: 0 00:31:20.561 multicast: 1 00:31:20.561 cached: 1 00:31:21.494 [2024-04-18 11:18:50.088113] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:31:21.494 [2024-04-18 11:18:50.088149] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:31:21.494 [2024-04-18 11:18:50.088169] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:21.752 [2024-04-18 11:18:50.174308] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:31:21.752 [2024-04-18 11:18:50.187751] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:21.752 [2024-04-18 11:18:50.187776] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:21.753 [2024-04-18 11:18:50.187810] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:21.753 [2024-04-18 11:18:50.232844] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:31:21.753 [2024-04-18 11:18:50.232875] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:31:21.753 [2024-04-18 11:18:50.276015] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:31:21.753 [2024-04-18 11:18:50.337892] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:31:21.753 [2024-04-18 11:18:50.337925] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:24.280 11:18:52 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:31:24.280 11:18:52 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:31:24.280 11:18:52 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:31:24.280 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.280 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:31:24.280 11:18:52 -- host/mdns_discovery.sh@80 -- # sort 00:31:24.280 11:18:52 -- host/mdns_discovery.sh@80 -- # xargs 00:31:24.280 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.539 11:18:52 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:31:24.539 11:18:52 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:31:24.539 11:18:52 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:24.539 11:18:52 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:31:24.539 11:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.539 11:18:52 -- common/autotest_common.sh@10 -- # set +x 00:31:24.539 11:18:52 -- host/mdns_discovery.sh@76 -- # xargs 00:31:24.539 11:18:52 -- host/mdns_discovery.sh@76 -- # sort 00:31:24.539 11:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:24.539 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@68 -- # sort 00:31:24.539 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@68 -- # 
xargs 00:31:24.539 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:24.539 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.539 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@64 -- # xargs 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@64 -- # sort 00:31:24.539 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:31:24.539 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:31:24.539 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # xargs 00:31:24.539 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:31:24.539 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.539 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # xargs 00:31:24.539 11:18:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:31:24.797 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:24.797 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.797 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.797 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:24.797 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.797 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.797 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:31:24.797 11:18:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.797 11:18:53 -- common/autotest_common.sh@10 -- # set +x 00:31:24.797 11:18:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.797 11:18:53 -- host/mdns_discovery.sh@139 -- # sleep 1 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@64 -- # xargs 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@64 -- # sort 00:31:25.728 11:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:25.728 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:31:25.728 11:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:25.728 11:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:25.728 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:31:25.728 11:18:54 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:31:25.729 11:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:25.987 11:18:54 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:31:25.987 11:18:54 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:31:25.987 11:18:54 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:31:25.987 11:18:54 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:25.987 11:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:25.987 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:31:25.987 [2024-04-18 11:18:54.409820] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:25.987 [2024-04-18 11:18:54.410593] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:25.987 [2024-04-18 11:18:54.410655] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:25.987 [2024-04-18 11:18:54.410694] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:25.987 [2024-04-18 11:18:54.410709] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:25.987 11:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:25.987 11:18:54 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:31:25.987 11:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:25.987 11:18:54 -- common/autotest_common.sh@10 -- # set +x 00:31:25.987 [2024-04-18 11:18:54.417739] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:31:25.987 [2024-04-18 11:18:54.418543] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:25.987 [2024-04-18 11:18:54.418620] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:25.987 11:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:25.987 11:18:54 -- host/mdns_discovery.sh@149 -- # sleep 1 00:31:25.987 [2024-04-18 11:18:54.549702] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:31:25.987 [2024-04-18 11:18:54.549954] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:31:25.987 [2024-04-18 11:18:54.611046] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:31:25.987 [2024-04-18 11:18:54.611096] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:25.987 [2024-04-18 11:18:54.611113] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:25.987 [2024-04-18 11:18:54.611136] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:25.987 [2024-04-18 11:18:54.611253] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:31:25.987 [2024-04-18 11:18:54.611264] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:31:25.987 [2024-04-18 11:18:54.611269] 
bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:31:25.987 [2024-04-18 11:18:54.611285] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:26.245 [2024-04-18 11:18:54.656842] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.245 [2024-04-18 11:18:54.656889] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:26.245 [2024-04-18 11:18:54.656941] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:31:26.245 [2024-04-18 11:18:54.656951] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:31:26.812 11:18:55 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:31:26.812 11:18:55 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:26.812 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.812 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:31:26.812 11:18:55 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:31:26.812 11:18:55 -- host/mdns_discovery.sh@68 -- # sort 00:31:26.812 11:18:55 -- host/mdns_discovery.sh@68 -- # xargs 00:31:26.812 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.070 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.070 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@64 -- # xargs 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@64 -- # sort 00:31:27.070 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:31:27.070 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # sort -n 00:31:27.070 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # xargs 00:31:27.070 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:31:27.070 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.070 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # sort -n 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@72 -- # xargs 00:31:27.070 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:31:27.070 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.070 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:31:27.070 11:18:55 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:31:27.070 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.330 11:18:55 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:31:27.330 11:18:55 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:31:27.330 11:18:55 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:31:27.330 11:18:55 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.330 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.330 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:31:27.330 [2024-04-18 11:18:55.750633] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:27.330 [2024-04-18 11:18:55.750689] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:27.330 [2024-04-18 11:18:55.751589] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:27.330 [2024-04-18 11:18:55.751613] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:27.330 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.330 11:18:55 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:31:27.330 11:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:27.330 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:31:27.330 [2024-04-18 11:18:55.757815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.757860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.757875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.757888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.757898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.757908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.757918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.757927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.757936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.330 [2024-04-18 11:18:55.758620] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:27.330 [2024-04-18 11:18:55.758673] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:31:27.330 [2024-04-18 11:18:55.761127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.761163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.761176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.761186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.761196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.761205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.761215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.330 [2024-04-18 11:18:55.761224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.330 [2024-04-18 11:18:55.761233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.330 11:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:27.330 11:18:55 -- host/mdns_discovery.sh@162 -- # sleep 1 00:31:27.330 [2024-04-18 11:18:55.767770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.330 [2024-04-18 11:18:55.771085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.330 [2024-04-18 11:18:55.777792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.330 [2024-04-18 11:18:55.777942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.777995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.778013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.331 [2024-04-18 11:18:55.778025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.778059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.778076] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.778085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.778096] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.331 [2024-04-18 11:18:55.778112] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.331 [2024-04-18 11:18:55.781097] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.331 [2024-04-18 11:18:55.781184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.781232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.781249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.331 [2024-04-18 11:18:55.781259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.781276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.781301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.781311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.781327] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.331 [2024-04-18 11:18:55.781341] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.331 [2024-04-18 11:18:55.787864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.331 [2024-04-18 11:18:55.787959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.788022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.788054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.331 [2024-04-18 11:18:55.788067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.788083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.788096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.788105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.788114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.331 [2024-04-18 11:18:55.788129] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.331 [2024-04-18 11:18:55.791150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.331 [2024-04-18 11:18:55.791246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.791294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.791311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.331 [2024-04-18 11:18:55.791322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.791337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.791362] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.791372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.791381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.331 [2024-04-18 11:18:55.791396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.331 [2024-04-18 11:18:55.797926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.331 [2024-04-18 11:18:55.798013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.798082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.798100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.331 [2024-04-18 11:18:55.798111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.798127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.798142] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.798150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.798159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.331 [2024-04-18 11:18:55.798174] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.331 [2024-04-18 11:18:55.801231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.331 [2024-04-18 11:18:55.801315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.801362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.801378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.331 [2024-04-18 11:18:55.801389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.801405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.801437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.801448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.801457] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.331 [2024-04-18 11:18:55.801472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.331 [2024-04-18 11:18:55.807984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.331 [2024-04-18 11:18:55.808091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.808141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.808158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.331 [2024-04-18 11:18:55.808169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.808186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.808200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.808208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.808218] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.331 [2024-04-18 11:18:55.808232] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.331 [2024-04-18 11:18:55.811285] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.331 [2024-04-18 11:18:55.811369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.811418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.811434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.331 [2024-04-18 11:18:55.811444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.811460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.811492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.811502] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.811526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.331 [2024-04-18 11:18:55.811541] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.331 [2024-04-18 11:18:55.818043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.331 [2024-04-18 11:18:55.818137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.818185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.818202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.331 [2024-04-18 11:18:55.818212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.818228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.331 [2024-04-18 11:18:55.818241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.331 [2024-04-18 11:18:55.818250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.331 [2024-04-18 11:18:55.818258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.331 [2024-04-18 11:18:55.818273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.331 [2024-04-18 11:18:55.821337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.331 [2024-04-18 11:18:55.821451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.821503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.331 [2024-04-18 11:18:55.821519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.331 [2024-04-18 11:18:55.821530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.331 [2024-04-18 11:18:55.821545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.821577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.821588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.821597] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.332 [2024-04-18 11:18:55.821611] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.332 [2024-04-18 11:18:55.828116] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.332 [2024-04-18 11:18:55.828228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.828281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.828303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.332 [2024-04-18 11:18:55.828314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.828330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.828343] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.828351] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.828360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.332 [2024-04-18 11:18:55.828390] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.332 [2024-04-18 11:18:55.831420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.332 [2024-04-18 11:18:55.831502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.831550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.831566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.332 [2024-04-18 11:18:55.831576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.831592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.831623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.831633] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.831642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.332 [2024-04-18 11:18:55.831657] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.332 [2024-04-18 11:18:55.838214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.332 [2024-04-18 11:18:55.838314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.838361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.838378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.332 [2024-04-18 11:18:55.838388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.838404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.838418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.838426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.838435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.332 [2024-04-18 11:18:55.838449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.332 [2024-04-18 11:18:55.841473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.332 [2024-04-18 11:18:55.841560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.841607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.841624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.332 [2024-04-18 11:18:55.841635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.841651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.841683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.841693] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.841702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.332 [2024-04-18 11:18:55.841732] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.332 [2024-04-18 11:18:55.848270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.332 [2024-04-18 11:18:55.848361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.848410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.848427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.332 [2024-04-18 11:18:55.848442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.848459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.848473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.848481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.848490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.332 [2024-04-18 11:18:55.848504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.332 [2024-04-18 11:18:55.851528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.332 [2024-04-18 11:18:55.851618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.851666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.851683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.332 [2024-04-18 11:18:55.851693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.851709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.851755] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.851766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.851775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.332 [2024-04-18 11:18:55.851789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.332 [2024-04-18 11:18:55.858328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.332 [2024-04-18 11:18:55.858413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.858476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.858509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.332 [2024-04-18 11:18:55.858519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.858535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.858548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.858557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.858565] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.332 [2024-04-18 11:18:55.858580] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.332 [2024-04-18 11:18:55.861585] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.332 [2024-04-18 11:18:55.861670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.861718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.861734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.332 [2024-04-18 11:18:55.861745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.861760] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.861806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.332 [2024-04-18 11:18:55.861817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.332 [2024-04-18 11:18:55.861826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.332 [2024-04-18 11:18:55.861840] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.332 [2024-04-18 11:18:55.868382] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.332 [2024-04-18 11:18:55.868495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.868541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.332 [2024-04-18 11:18:55.868557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.332 [2024-04-18 11:18:55.868567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.332 [2024-04-18 11:18:55.868582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.332 [2024-04-18 11:18:55.868596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.333 [2024-04-18 11:18:55.868604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.333 [2024-04-18 11:18:55.868612] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.333 [2024-04-18 11:18:55.868626] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.333 [2024-04-18 11:18:55.871641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.333 [2024-04-18 11:18:55.871723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.871771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.871788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.333 [2024-04-18 11:18:55.871798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.333 [2024-04-18 11:18:55.871813] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.333 [2024-04-18 11:18:55.871860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.333 [2024-04-18 11:18:55.871870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.333 [2024-04-18 11:18:55.871879] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.333 [2024-04-18 11:18:55.871904] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.333 [2024-04-18 11:18:55.878450] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.333 [2024-04-18 11:18:55.878536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.878583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.878599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.333 [2024-04-18 11:18:55.878610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.333 [2024-04-18 11:18:55.878625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.333 [2024-04-18 11:18:55.878639] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.333 [2024-04-18 11:18:55.878647] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.333 [2024-04-18 11:18:55.878656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.333 [2024-04-18 11:18:55.878671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.333 [2024-04-18 11:18:55.881693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:31:27.333 [2024-04-18 11:18:55.881789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.881836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.881853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deef10 with addr=10.0.0.3, port=4420 00:31:27.333 [2024-04-18 11:18:55.881863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deef10 is same with the state(5) to be set 00:31:27.333 [2024-04-18 11:18:55.881878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deef10 (9): Bad file descriptor 00:31:27.333 [2024-04-18 11:18:55.881923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:31:27.333 [2024-04-18 11:18:55.881933] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:31:27.333 [2024-04-18 11:18:55.881942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:31:27.333 [2024-04-18 11:18:55.881957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.333 [2024-04-18 11:18:55.888504] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.333 [2024-04-18 11:18:55.888625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.888673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.333 [2024-04-18 11:18:55.888690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04760 with addr=10.0.0.2, port=4420 00:31:27.333 [2024-04-18 11:18:55.888700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04760 is same with the state(5) to be set 00:31:27.333 [2024-04-18 11:18:55.888716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04760 (9): Bad file descriptor 00:31:27.333 [2024-04-18 11:18:55.888730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.333 [2024-04-18 11:18:55.888738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.333 [2024-04-18 11:18:55.888747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.333 [2024-04-18 11:18:55.888761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
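The repeated posix_sock_create failures above are all errno 111, i.e. ECONNREFUSED: during the reset both controllers keep trying to reconnect to port 4420 on 10.0.0.2 and 10.0.0.3, where nothing is listening any more because the listeners have moved to 4421. A standalone sketch (not part of this run, and assuming the test interfaces and addresses are still up) that reproduces the same check with bash's /dev/tcp redirection:

# Probe the old port on both target addresses; a refused connection here
# corresponds to the errno 111 seen in the nvme_tcp reconnect loop above.
for ip in 10.0.0.2 10.0.0.3; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/${ip}/4420" 2>/dev/null; then
    echo "${ip}:4420 still accepting connections"
  else
    echo "${ip}:4420 refused or unreachable (cf. errno 111 / ECONNREFUSED)"
  fi
done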
00:31:27.333 [2024-04-18 11:18:55.889915] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:27.333 [2024-04-18 11:18:55.889946] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:27.333 [2024-04-18 11:18:55.889969] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:27.333 [2024-04-18 11:18:55.890003] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:31:27.333 [2024-04-18 11:18:55.890018] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:31:27.333 [2024-04-18 11:18:55.890075] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:27.590 [2024-04-18 11:18:55.976044] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:27.590 [2024-04-18 11:18:55.976131] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:31:28.156 11:18:56 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:31:28.156 11:18:56 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:31:28.156 11:18:56 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.156 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:28.156 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:31:28.156 11:18:56 -- host/mdns_discovery.sh@68 -- # sort 00:31:28.156 11:18:56 -- host/mdns_discovery.sh@68 -- # xargs 00:31:28.156 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:28.414 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@64 -- # sort 00:31:28.414 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@64 -- # xargs 00:31:28.414 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # sort -n 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:28.414 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:28.414 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # xargs 00:31:28.414 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
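The get_subsystem_names, get_bdev_list and get_subsystem_paths helpers traced above all have the same shape: an RPC against the host application's /tmp/host.sock socket, a jq projection, and sort/xargs to normalise the output for the string comparisons that follow. A rough standalone equivalent, assuming rpc.py from the checked-out SPDK tree stands in for rpc_cmd:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/tmp/host.sock
# Names of the attached controllers (mdns0_nvme0 mdns1_nvme0 in this run)
"$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
# Bdevs created from those controllers (mdns0_nvme0n1 ... mdns1_nvme0n2)
"$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
# Ports of the paths behind one controller; after the listener switch this
# prints 4421, which is what the [[ 4421 == 4421 ]] checks below assert.
"$rpc" -s "$sock" bdev_nvme_get_controllers -n mdns0_nvme0 \
  | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs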
00:31:28.414 11:18:56 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:28.414 11:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:28.414 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # sort -n 00:31:28.414 11:18:56 -- host/mdns_discovery.sh@72 -- # xargs 00:31:28.414 11:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:28.414 11:18:57 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:31:28.414 11:18:57 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:31:28.414 11:18:57 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:31:28.414 11:18:57 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:31:28.414 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:28.414 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:31:28.414 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:28.672 11:18:57 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:31:28.672 11:18:57 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:31:28.672 11:18:57 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:31:28.672 11:18:57 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:31:28.672 11:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:28.672 11:18:57 -- common/autotest_common.sh@10 -- # set +x 00:31:28.672 11:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:28.672 11:18:57 -- host/mdns_discovery.sh@172 -- # sleep 1 00:31:28.672 [2024-04-18 11:18:57.183302] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:31:29.609 11:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@80 -- # sort 00:31:29.609 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@80 -- # xargs 00:31:29.609 11:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.609 11:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.609 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@68 -- # xargs 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@68 -- # sort 00:31:29.609 11:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:29.609 
11:18:58 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.609 11:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.609 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@64 -- # sort 00:31:29.609 11:18:58 -- host/mdns_discovery.sh@64 -- # xargs 00:31:29.609 11:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:31:29.867 11:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.867 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:31:29.867 11:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:29.867 11:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.867 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:31:29.867 11:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:31:29.867 11:18:58 -- common/autotest_common.sh@638 -- # local es=0 00:31:29.867 11:18:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:31:29.867 11:18:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:29.867 11:18:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:29.867 11:18:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:29.867 11:18:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:29.867 11:18:58 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:31:29.867 11:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.867 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:31:29.867 [2024-04-18 11:18:58.324691] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:31:29.867 2024/04/18 11:18:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:31:29.867 request: 00:31:29.867 { 00:31:29.867 "method": "bdev_nvme_start_mdns_discovery", 00:31:29.867 "params": { 00:31:29.867 "name": "mdns", 00:31:29.867 "svcname": "_nvme-disc._http", 00:31:29.867 "hostnqn": "nqn.2021-12.io.spdk:test" 00:31:29.867 } 00:31:29.867 } 00:31:29.867 Got JSON-RPC error response 00:31:29.867 GoRPCClient: error on JSON-RPC call 00:31:29.867 11:18:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:29.867 11:18:58 -- 
common/autotest_common.sh@641 -- # es=1 00:31:29.867 11:18:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:29.867 11:18:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:29.867 11:18:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:29.867 11:18:58 -- host/mdns_discovery.sh@183 -- # sleep 5 00:31:30.124 [2024-04-18 11:18:58.713476] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:31:30.381 [2024-04-18 11:18:58.813440] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:31:30.381 [2024-04-18 11:18:58.913468] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:31:30.381 [2024-04-18 11:18:58.913508] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:31:30.381 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:31:30.381 cookie is 0 00:31:30.381 is_local: 1 00:31:30.381 our_own: 0 00:31:30.381 wide_area: 0 00:31:30.381 multicast: 1 00:31:30.381 cached: 1 00:31:30.381 [2024-04-18 11:18:59.013489] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:31:30.381 [2024-04-18 11:18:59.013537] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:31:30.381 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:31:30.381 cookie is 0 00:31:30.381 is_local: 1 00:31:30.381 our_own: 0 00:31:30.381 wide_area: 0 00:31:30.381 multicast: 1 00:31:30.381 cached: 1 00:31:31.315 [2024-04-18 11:18:59.920096] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:31:31.315 [2024-04-18 11:18:59.920175] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:31:31.315 [2024-04-18 11:18:59.920210] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:31:31.573 [2024-04-18 11:19:00.007329] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:31:31.573 [2024-04-18 11:19:00.020105] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:31.573 [2024-04-18 11:19:00.020134] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:31.573 [2024-04-18 11:19:00.020165] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.573 [2024-04-18 11:19:00.075488] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:31:31.573 [2024-04-18 11:19:00.075573] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:31:31.573 [2024-04-18 11:19:00.107745] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:31:31.573 [2024-04-18 11:19:00.174317] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:31:31.573 [2024-04-18 11:19:00.174360] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:31:34.864 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.864 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@80 -- # sort 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@80 -- # xargs 00:31:34.864 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@76 -- # sort 00:31:34.864 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.864 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@76 -- # xargs 00:31:34.864 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.864 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.864 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@64 -- # sort 00:31:34.864 11:19:03 -- host/mdns_discovery.sh@64 -- # xargs 00:31:35.121 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.121 11:19:03 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:35.121 11:19:03 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:35.121 11:19:03 -- common/autotest_common.sh@638 -- # local es=0 00:31:35.121 11:19:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:35.121 11:19:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:35.121 11:19:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:35.121 11:19:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:35.121 11:19:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:35.121 11:19:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:31:35.121 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.121 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:31:35.121 [2024-04-18 11:19:03.529458] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:31:35.122 2024/04/18 11:19:03 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:31:35.122 request: 00:31:35.122 { 00:31:35.122 "method": "bdev_nvme_start_mdns_discovery", 00:31:35.122 "params": { 00:31:35.122 "name": "cdc", 00:31:35.122 "svcname": "_nvme-disc._tcp", 00:31:35.122 "hostnqn": "nqn.2021-12.io.spdk:test" 00:31:35.122 } 00:31:35.122 } 00:31:35.122 Got JSON-RPC error response 00:31:35.122 GoRPCClient: error on JSON-RPC call 00:31:35.122 11:19:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:35.122 11:19:03 -- common/autotest_common.sh@641 -- # es=1 00:31:35.122 11:19:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:35.122 11:19:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:35.122 11:19:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:35.122 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.122 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@76 -- # sort 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@76 -- # xargs 00:31:35.122 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.122 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.122 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@64 -- # sort 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@64 -- # xargs 00:31:35.122 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:31:35.122 11:19:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.122 11:19:03 -- common/autotest_common.sh@10 -- # set +x 00:31:35.122 11:19:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@197 -- # kill 105552 00:31:35.122 11:19:03 -- host/mdns_discovery.sh@200 -- # wait 105552 00:31:35.380 [2024-04-18 11:19:03.769378] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:31:35.380 11:19:03 -- host/mdns_discovery.sh@201 -- # kill 105632 00:31:35.380 Got SIGTERM, quitting. 00:31:35.380 11:19:03 -- host/mdns_discovery.sh@202 -- # kill 105586 00:31:35.380 11:19:03 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:31:35.380 11:19:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:35.380 11:19:03 -- nvmf/common.sh@117 -- # sync 00:31:35.380 Got SIGTERM, quitting. 
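The two bdev_nvme_start_mdns_discovery failures above are deliberate negative checks: once a discovery service is running, a second one that reuses the name mdns or claims the already-watched _nvme-disc._tcp service is expected to be rejected with -17 / "File exists". A standalone sketch of that expectation (same socket and host NQN as the trace, rpc.py standing in for rpc_cmd):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if "$rpc" -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
     -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 2>mdns_err.log; then
  echo "unexpected: a colliding mDNS discovery service was started" >&2
  exit 1
fi
# The JSON-RPC layer reports the collision as Code=-17 Msg=File exists
grep -q 'File exists' mdns_err.log && echo "duplicate mDNS discovery rejected as expected"

With both checks done the discovery services are stopped and the test tears everything down, which is what the avahi-daemon shutdown messages below correspond to.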
00:31:35.380 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:31:35.380 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:31:35.380 avahi-daemon 0.8 exiting. 00:31:35.380 11:19:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:35.380 11:19:03 -- nvmf/common.sh@120 -- # set +e 00:31:35.380 11:19:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:35.380 11:19:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:35.380 rmmod nvme_tcp 00:31:35.380 rmmod nvme_fabrics 00:31:35.380 rmmod nvme_keyring 00:31:35.380 11:19:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:35.380 11:19:03 -- nvmf/common.sh@124 -- # set -e 00:31:35.380 11:19:03 -- nvmf/common.sh@125 -- # return 0 00:31:35.380 11:19:03 -- nvmf/common.sh@478 -- # '[' -n 105502 ']' 00:31:35.380 11:19:03 -- nvmf/common.sh@479 -- # killprocess 105502 00:31:35.380 11:19:03 -- common/autotest_common.sh@936 -- # '[' -z 105502 ']' 00:31:35.380 11:19:03 -- common/autotest_common.sh@940 -- # kill -0 105502 00:31:35.380 11:19:03 -- common/autotest_common.sh@941 -- # uname 00:31:35.380 11:19:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:35.380 11:19:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105502 00:31:35.380 11:19:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:35.380 11:19:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:35.380 killing process with pid 105502 00:31:35.380 11:19:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105502' 00:31:35.380 11:19:03 -- common/autotest_common.sh@955 -- # kill 105502 00:31:35.380 11:19:03 -- common/autotest_common.sh@960 -- # wait 105502 00:31:35.639 11:19:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:35.639 11:19:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:35.639 11:19:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:35.639 11:19:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:35.639 11:19:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:35.639 11:19:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.639 11:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.639 11:19:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.639 11:19:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:35.639 ************************************ 00:31:35.639 END TEST nvmf_mdns_discovery 00:31:35.639 ************************************ 00:31:35.639 00:31:35.639 real 0m20.643s 00:31:35.639 user 0m40.416s 00:31:35.639 sys 0m2.063s 00:31:35.639 11:19:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:35.639 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:31:35.897 11:19:04 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:31:35.897 11:19:04 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:31:35.897 11:19:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:35.897 11:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:35.897 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:31:35.897 ************************************ 00:31:35.897 START TEST nvmf_multipath 00:31:35.897 ************************************ 00:31:35.897 11:19:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:31:35.897 * 
Looking for test storage... 00:31:35.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:35.897 11:19:04 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:35.897 11:19:04 -- nvmf/common.sh@7 -- # uname -s 00:31:35.897 11:19:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.897 11:19:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.897 11:19:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.897 11:19:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.897 11:19:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.897 11:19:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.897 11:19:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.897 11:19:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.897 11:19:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.897 11:19:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.897 11:19:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:31:35.897 11:19:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:31:35.897 11:19:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.897 11:19:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.897 11:19:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:35.897 11:19:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.897 11:19:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:35.897 11:19:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.897 11:19:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.897 11:19:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.897 11:19:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.897 11:19:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.897 11:19:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.897 11:19:04 -- paths/export.sh@5 -- # export PATH 00:31:35.897 11:19:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.897 11:19:04 -- nvmf/common.sh@47 -- # : 0 00:31:35.897 11:19:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:35.897 11:19:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:35.897 11:19:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.897 11:19:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.897 11:19:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.897 11:19:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:35.897 11:19:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:35.897 11:19:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:35.897 11:19:04 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:35.897 11:19:04 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:35.897 11:19:04 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:35.897 11:19:04 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:31:35.897 11:19:04 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:35.897 11:19:04 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:35.897 11:19:04 -- host/multipath.sh@30 -- # nvmftestinit 00:31:35.897 11:19:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:35.897 11:19:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.897 11:19:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:35.897 11:19:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:35.897 11:19:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:35.897 11:19:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.897 11:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.897 11:19:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.897 11:19:04 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:35.897 11:19:04 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:35.897 11:19:04 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:35.897 11:19:04 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:35.897 11:19:04 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:35.897 11:19:04 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:35.897 11:19:04 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.897 11:19:04 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.897 11:19:04 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:35.897 11:19:04 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:35.897 11:19:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:35.897 11:19:04 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:35.897 11:19:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:35.897 11:19:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.897 11:19:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:35.897 11:19:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:35.897 11:19:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:35.897 11:19:04 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:35.897 11:19:04 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:35.897 11:19:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:35.897 Cannot find device "nvmf_tgt_br" 00:31:35.897 11:19:04 -- nvmf/common.sh@155 -- # true 00:31:35.897 11:19:04 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:35.897 Cannot find device "nvmf_tgt_br2" 00:31:35.897 11:19:04 -- nvmf/common.sh@156 -- # true 00:31:35.898 11:19:04 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:35.898 11:19:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:35.898 Cannot find device "nvmf_tgt_br" 00:31:35.898 11:19:04 -- nvmf/common.sh@158 -- # true 00:31:35.898 11:19:04 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:36.206 Cannot find device "nvmf_tgt_br2" 00:31:36.206 11:19:04 -- nvmf/common.sh@159 -- # true 00:31:36.206 11:19:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:36.206 11:19:04 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:36.206 11:19:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:36.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:36.206 11:19:04 -- nvmf/common.sh@162 -- # true 00:31:36.206 11:19:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:36.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:36.206 11:19:04 -- nvmf/common.sh@163 -- # true 00:31:36.206 11:19:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:36.206 11:19:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:36.206 11:19:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:36.206 11:19:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:36.206 11:19:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:36.206 11:19:04 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:36.206 11:19:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:36.206 11:19:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:36.206 11:19:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:36.206 11:19:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:36.206 11:19:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:36.206 11:19:04 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:31:36.206 11:19:04 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:36.206 11:19:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:36.206 11:19:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:36.206 11:19:04 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:36.206 11:19:04 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:36.206 11:19:04 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:36.206 11:19:04 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:36.206 11:19:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:36.206 11:19:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:36.206 11:19:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:36.206 11:19:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:36.206 11:19:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:36.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:31:36.206 00:31:36.206 --- 10.0.0.2 ping statistics --- 00:31:36.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.206 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:31:36.206 11:19:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:36.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:36.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:31:36.206 00:31:36.206 --- 10.0.0.3 ping statistics --- 00:31:36.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.206 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:31:36.206 11:19:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:36.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:31:36.206 00:31:36.206 --- 10.0.0.1 ping statistics --- 00:31:36.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.206 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:31:36.206 11:19:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.206 11:19:04 -- nvmf/common.sh@422 -- # return 0 00:31:36.206 11:19:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:36.207 11:19:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.207 11:19:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:36.207 11:19:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:36.207 11:19:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.207 11:19:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:36.207 11:19:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:36.465 11:19:04 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:31:36.465 11:19:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:36.465 11:19:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:36.465 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:31:36.465 11:19:04 -- nvmf/common.sh@470 -- # nvmfpid=106146 00:31:36.465 11:19:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:36.465 11:19:04 -- nvmf/common.sh@471 -- # waitforlisten 106146 00:31:36.465 11:19:04 -- common/autotest_common.sh@817 -- # '[' -z 106146 ']' 00:31:36.465 11:19:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.465 11:19:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:36.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.465 11:19:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.465 11:19:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:36.465 11:19:04 -- common/autotest_common.sh@10 -- # set +x 00:31:36.465 [2024-04-18 11:19:04.890473] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:31:36.465 [2024-04-18 11:19:04.890574] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.465 [2024-04-18 11:19:05.037829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:36.722 [2024-04-18 11:19:05.140207] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.722 [2024-04-18 11:19:05.140277] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.722 [2024-04-18 11:19:05.140291] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.723 [2024-04-18 11:19:05.140302] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.723 [2024-04-18 11:19:05.140311] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
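Before the target is started, the nvmf_veth_init block above builds the whole test network in software: three veth pairs whose host-side ends are enslaved to an nvmf_br bridge, with the two target ends (10.0.0.2 and 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace while the initiator keeps 10.0.0.1 on nvmf_init_if; the three pings then confirm reachability in both directions. Condensed and slightly reordered into a standalone sketch with the same names and addresses as the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target path 1, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$l" master nvmf_br && ip link set "$l" up
done
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # initiator -> both target paths
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> initiator

With the network in place, nvmfappstart launches the target (nvmfpid 106146) inside that namespace with core mask 0x3, which is where the reactor start-up notices below come from.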
00:31:36.723 [2024-04-18 11:19:05.140414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.723 [2024-04-18 11:19:05.140429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.288 11:19:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:37.288 11:19:05 -- common/autotest_common.sh@850 -- # return 0 00:31:37.288 11:19:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:37.288 11:19:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:37.288 11:19:05 -- common/autotest_common.sh@10 -- # set +x 00:31:37.546 11:19:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.546 11:19:05 -- host/multipath.sh@33 -- # nvmfapp_pid=106146 00:31:37.546 11:19:05 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:37.804 [2024-04-18 11:19:06.208338] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.804 11:19:06 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:38.062 Malloc0 00:31:38.062 11:19:06 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:38.320 11:19:06 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:38.578 11:19:07 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.836 [2024-04-18 11:19:07.298115] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.836 11:19:07 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:39.094 [2024-04-18 11:19:07.534277] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:39.094 11:19:07 -- host/multipath.sh@44 -- # bdevperf_pid=106244 00:31:39.094 11:19:07 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:39.094 11:19:07 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:39.094 11:19:07 -- host/multipath.sh@47 -- # waitforlisten 106244 /var/tmp/bdevperf.sock 00:31:39.094 11:19:07 -- common/autotest_common.sh@817 -- # '[' -z 106244 ']' 00:31:39.094 11:19:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:39.094 11:19:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:39.094 11:19:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:39.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
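Target-side provisioning for the multipath run is then a handful of RPCs: one TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with ANA reporting enabled, and two listeners on the same address but different ports so the host ends up with two paths to one namespace. Roughly, using the same rpc.py invocations the trace shows (the flag comments are a reading of the options, not part of the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the test's options
"$rpc" bdev_malloc_create 64 512 -b Malloc0                    # backing bdev: 64 MiB, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -r -m 2                           # allow any host, ANA reporting enabled
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

On the host side (traced below) bdevperf attaches the same subsystem twice through its /var/tmp/bdevperf.sock RPC socket, first via port 4420 and then via 4421 with -x multipath, which is what turns the two listeners into two paths of a single Nvme0 controller; the set_ANA_state / confirm_io_on_port pairs that follow flip the listeners' ANA states and use the bpftrace script to verify which port the I/O actually lands on.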
00:31:39.094 11:19:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:39.094 11:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:40.028 11:19:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:40.028 11:19:08 -- common/autotest_common.sh@850 -- # return 0 00:31:40.028 11:19:08 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:40.286 11:19:08 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:40.852 Nvme0n1 00:31:40.852 11:19:09 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:41.111 Nvme0n1 00:31:41.111 11:19:09 -- host/multipath.sh@78 -- # sleep 1 00:31:41.111 11:19:09 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:42.047 11:19:10 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:31:42.047 11:19:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:42.305 11:19:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:42.563 11:19:11 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:31:42.563 11:19:11 -- host/multipath.sh@65 -- # dtrace_pid=106337 00:31:42.564 11:19:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:42.564 11:19:11 -- host/multipath.sh@66 -- # sleep 6 00:31:49.132 11:19:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:49.132 11:19:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:31:49.132 11:19:17 -- host/multipath.sh@67 -- # active_port=4421 00:31:49.132 11:19:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:49.132 Attaching 4 probes... 
00:31:49.132 @path[10.0.0.2, 4421]: 16479 00:31:49.132 @path[10.0.0.2, 4421]: 16945 00:31:49.132 @path[10.0.0.2, 4421]: 17180 00:31:49.132 @path[10.0.0.2, 4421]: 16624 00:31:49.132 @path[10.0.0.2, 4421]: 16861 00:31:49.132 11:19:17 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:49.132 11:19:17 -- host/multipath.sh@69 -- # sed -n 1p 00:31:49.132 11:19:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:31:49.132 11:19:17 -- host/multipath.sh@69 -- # port=4421 00:31:49.132 11:19:17 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:31:49.132 11:19:17 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:31:49.132 11:19:17 -- host/multipath.sh@72 -- # kill 106337 00:31:49.132 11:19:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:49.132 11:19:17 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:31:49.132 11:19:17 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:49.132 11:19:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:49.390 11:19:17 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:31:49.390 11:19:17 -- host/multipath.sh@65 -- # dtrace_pid=106462 00:31:49.390 11:19:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:49.390 11:19:17 -- host/multipath.sh@66 -- # sleep 6 00:31:55.951 11:19:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:31:55.951 11:19:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:31:55.951 11:19:24 -- host/multipath.sh@67 -- # active_port=4420 00:31:55.951 11:19:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:55.951 Attaching 4 probes... 
00:31:55.951 @path[10.0.0.2, 4420]: 16981 00:31:55.951 @path[10.0.0.2, 4420]: 16847 00:31:55.951 @path[10.0.0.2, 4420]: 17047 00:31:55.951 @path[10.0.0.2, 4420]: 16989 00:31:55.951 @path[10.0.0.2, 4420]: 17155 00:31:55.951 11:19:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:31:55.951 11:19:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:31:55.951 11:19:24 -- host/multipath.sh@69 -- # sed -n 1p 00:31:55.951 11:19:24 -- host/multipath.sh@69 -- # port=4420 00:31:55.951 11:19:24 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:31:55.951 11:19:24 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:31:55.951 11:19:24 -- host/multipath.sh@72 -- # kill 106462 00:31:55.951 11:19:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:55.951 11:19:24 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:31:55.951 11:19:24 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:55.951 11:19:24 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:56.209 11:19:24 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:31:56.209 11:19:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:31:56.209 11:19:24 -- host/multipath.sh@65 -- # dtrace_pid=106597 00:31:56.209 11:19:24 -- host/multipath.sh@66 -- # sleep 6 00:32:02.825 11:19:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:02.825 11:19:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:02.825 11:19:30 -- host/multipath.sh@67 -- # active_port=4421 00:32:02.825 11:19:30 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:02.825 Attaching 4 probes... 
00:32:02.825 @path[10.0.0.2, 4421]: 12577 00:32:02.825 @path[10.0.0.2, 4421]: 16790 00:32:02.825 @path[10.0.0.2, 4421]: 16721 00:32:02.825 @path[10.0.0.2, 4421]: 16367 00:32:02.825 @path[10.0.0.2, 4421]: 16769 00:32:02.825 11:19:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:02.825 11:19:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:02.825 11:19:30 -- host/multipath.sh@69 -- # sed -n 1p 00:32:02.825 11:19:30 -- host/multipath.sh@69 -- # port=4421 00:32:02.825 11:19:30 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:02.825 11:19:30 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:02.825 11:19:30 -- host/multipath.sh@72 -- # kill 106597 00:32:02.825 11:19:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:02.825 11:19:30 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:32:02.825 11:19:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:02.825 11:19:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:03.083 11:19:31 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:32:03.083 11:19:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:03.083 11:19:31 -- host/multipath.sh@65 -- # dtrace_pid=106729 00:32:03.083 11:19:31 -- host/multipath.sh@66 -- # sleep 6 00:32:09.646 11:19:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:09.646 11:19:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:32:09.646 11:19:37 -- host/multipath.sh@67 -- # active_port= 00:32:09.646 11:19:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:09.646 Attaching 4 probes... 
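[editor note] The other half of confirm_io_on_port (host/multipath.sh@67 above) asks the target which listener currently carries the requested ANA state: nvmf_subsystem_get_listeners returns a JSON array with per-ANA-group state for every listener, and jq picks the trsvcid of the matching one. The command is exactly as traced; only the shell variable around it is illustrative:

    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    echo "$active_port"   # 4421 for the iteration traced above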
00:32:09.646 00:32:09.646 00:32:09.646 00:32:09.646 00:32:09.646 00:32:09.646 11:19:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:09.646 11:19:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:09.646 11:19:37 -- host/multipath.sh@69 -- # sed -n 1p 00:32:09.646 11:19:37 -- host/multipath.sh@69 -- # port= 00:32:09.646 11:19:37 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:32:09.646 11:19:37 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:32:09.646 11:19:37 -- host/multipath.sh@72 -- # kill 106729 00:32:09.646 11:19:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:09.646 11:19:37 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:32:09.646 11:19:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:09.646 11:19:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:09.904 11:19:38 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:32:09.904 11:19:38 -- host/multipath.sh@65 -- # dtrace_pid=106861 00:32:09.904 11:19:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:09.904 11:19:38 -- host/multipath.sh@66 -- # sleep 6 00:32:16.459 11:19:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:16.459 11:19:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:16.459 11:19:44 -- host/multipath.sh@67 -- # active_port=4421 00:32:16.459 11:19:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:16.459 Attaching 4 probes... 
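[editor note] The iteration traced at host/multipath.sh@93/@94 is the degenerate case: with both listeners inaccessible no path carries I/O, the @path map in trace.txt stays empty (the blank entries above), the jq filter matches nothing, and the script deliberately expects an empty port, so the [[ '' == '' ]] checks at @70/@71 still pass. Read directly from the trace:

    set_ANA_state inaccessible inaccessible   # @93: no usable path left
    # @94: confirm_io_on_port '' '' -- both the expected ANA state and the
    # expected port are empty strings, i.e. "verify that nothing is doing I/O".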
00:32:16.459 @path[10.0.0.2, 4421]: 15737 00:32:16.459 @path[10.0.0.2, 4421]: 16443 00:32:16.459 @path[10.0.0.2, 4421]: 15853 00:32:16.459 @path[10.0.0.2, 4421]: 16468 00:32:16.459 @path[10.0.0.2, 4421]: 16349 00:32:16.459 11:19:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:16.459 11:19:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:16.459 11:19:44 -- host/multipath.sh@69 -- # sed -n 1p 00:32:16.459 11:19:44 -- host/multipath.sh@69 -- # port=4421 00:32:16.459 11:19:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:16.459 11:19:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:16.459 11:19:44 -- host/multipath.sh@72 -- # kill 106861 00:32:16.459 11:19:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:16.459 11:19:44 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:16.459 [2024-04-18 11:19:44.989355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989516] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989641] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989682] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989709] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989734] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989751] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989833] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.459 [2024-04-18 11:19:44.989893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the 
state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.989994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.990002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.990011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.990019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 [2024-04-18 11:19:44.990027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155430 is same with the state(5) to be set 00:32:16.460 11:19:45 -- host/multipath.sh@101 -- # sleep 1 00:32:17.393 11:19:46 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:32:17.393 11:19:46 -- host/multipath.sh@65 -- # dtrace_pid=106991 00:32:17.393 11:19:46 -- host/multipath.sh@66 -- # sleep 6 00:32:17.393 11:19:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:23.951 11:19:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:23.951 11:19:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:32:23.951 11:19:52 -- host/multipath.sh@67 -- # active_port=4420 00:32:23.951 11:19:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:23.951 Attaching 4 probes... 
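[editor note] The step traced at host/multipath.sh@100-@104 removes the 4421 listener while bdevperf is still driving I/O; the burst of tcp.c "recv state of tqpair" messages above is the target tearing that queue pair down, and after a one-second settle the script verifies that traffic has failed over to the remaining non_optimized path on 4420. The RPC call is verbatim from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 1
    # @104: confirm_io_on_port non_optimized 4420 then checks that the surviving
    # path on port 4420 is the one counting I/O in trace.txt.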
00:32:23.951 @path[10.0.0.2, 4420]: 15957 00:32:23.951 @path[10.0.0.2, 4420]: 16528 00:32:23.951 @path[10.0.0.2, 4420]: 16690 00:32:23.951 @path[10.0.0.2, 4420]: 16589 00:32:23.951 @path[10.0.0.2, 4420]: 16462 00:32:23.951 11:19:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:23.951 11:19:52 -- host/multipath.sh@69 -- # sed -n 1p 00:32:23.951 11:19:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:23.951 11:19:52 -- host/multipath.sh@69 -- # port=4420 00:32:23.951 11:19:52 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:32:23.951 11:19:52 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:32:23.951 11:19:52 -- host/multipath.sh@72 -- # kill 106991 00:32:23.951 11:19:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:23.951 11:19:52 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:23.951 [2024-04-18 11:19:52.572371] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:24.208 11:19:52 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:24.465 11:19:52 -- host/multipath.sh@111 -- # sleep 6 00:32:31.064 11:19:58 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:32:31.064 11:19:58 -- host/multipath.sh@65 -- # dtrace_pid=107184 00:32:31.064 11:19:58 -- host/multipath.sh@66 -- # sleep 6 00:32:31.064 11:19:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:36.338 11:20:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:36.338 11:20:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:36.596 11:20:05 -- host/multipath.sh@67 -- # active_port=4421 00:32:36.596 11:20:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:36.596 Attaching 4 probes... 
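[editor note] The final reconfiguration, traced at host/multipath.sh@107-@112, brings the removed path back: the 4421 listener is re-added (hence the "Target Listening on 10.0.0.2 port 4421" notice above), promoted to optimized, and after the ANA change has had time to propagate the script confirms that I/O migrates back to it. Verbatim from the trace, with only the rpc_py shorthand added:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized
    sleep 6
    # @112: confirm_io_on_port optimized 4421 -- the last I/O check of the test.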
00:32:36.596 @path[10.0.0.2, 4421]: 15957 00:32:36.596 @path[10.0.0.2, 4421]: 16352 00:32:36.596 @path[10.0.0.2, 4421]: 15811 00:32:36.596 @path[10.0.0.2, 4421]: 16281 00:32:36.596 @path[10.0.0.2, 4421]: 16043 00:32:36.596 11:20:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:36.596 11:20:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:36.596 11:20:05 -- host/multipath.sh@69 -- # sed -n 1p 00:32:36.596 11:20:05 -- host/multipath.sh@69 -- # port=4421 00:32:36.596 11:20:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:36.596 11:20:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:36.596 11:20:05 -- host/multipath.sh@72 -- # kill 107184 00:32:36.596 11:20:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:36.596 11:20:05 -- host/multipath.sh@114 -- # killprocess 106244 00:32:36.596 11:20:05 -- common/autotest_common.sh@936 -- # '[' -z 106244 ']' 00:32:36.596 11:20:05 -- common/autotest_common.sh@940 -- # kill -0 106244 00:32:36.596 11:20:05 -- common/autotest_common.sh@941 -- # uname 00:32:36.596 11:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:36.596 11:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106244 00:32:36.596 killing process with pid 106244 00:32:36.596 11:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:32:36.596 11:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:32:36.596 11:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 106244' 00:32:36.596 11:20:05 -- common/autotest_common.sh@955 -- # kill 106244 00:32:36.596 11:20:05 -- common/autotest_common.sh@960 -- # wait 106244 00:32:36.876 Connection closed with partial response: 00:32:36.876 00:32:36.876 00:32:36.876 11:20:05 -- host/multipath.sh@116 -- # wait 106244 00:32:36.876 11:20:05 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:32:36.876 [2024-04-18 11:19:07.611176] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:32:36.876 [2024-04-18 11:19:07.611324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106244 ] 00:32:36.876 [2024-04-18 11:19:07.753383] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.876 [2024-04-18 11:19:07.848559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.876 Running I/O for 90 seconds... 
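[editor note] Teardown: killprocess 106244 (traced from common/autotest_common.sh@936-@960) stops the bdevperf initiator; "Connection closed with partial response" is bdevperf reporting the I/O still in flight when it was killed, and everything after "cat try.txt" is the initiator-side log replayed for the record. A condensed reading of the killprocess steps, not the verbatim helper:

    pid=106244
    kill -0 "$pid"                                   # @940: make sure it is still alive
    process_name=$(ps --no-headers -o comm= "$pid")  # @942: reactor_2 here (bdevperf)
    [[ $process_name != sudo ]] && echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # @960: collect the exit status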
00:32:36.876 [2024-04-18 11:19:17.905399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.905913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.905960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.876 [2024-04-18 11:19:17.906623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.876 [2024-04-18 11:19:17.906642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.906656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.906675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.877 [2024-04-18 11:19:17.906689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.906708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.906722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.906741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.906754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.906773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.906788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.906808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.906822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.907972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.907994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
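[editor note] These NOTICE lines come from that bdevperf log: every WRITE/READ submitted on the path that had just been flipped to an inaccessible ANA state completes with NVMe status type 0x3 (Path Related) / code 0x02 (Asymmetric Access Inaccessible), printed as "(03/02)", which is exactly what the ANA flips are meant to provoke. If you need to eyeball how many I/Os hit that status when reading such a log by hand, a throwaway one-liner (not part of the test) is enough:

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt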
00:32:36.877 [2024-04-18 11:19:17.908418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:36.877 [2024-04-18 11:19:17.908575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.877 [2024-04-18 11:19:17.908589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.908965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.908979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.909021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.909074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:36.878 [2024-04-18 11:19:17.909544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.909683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.878 [2024-04-18 11:19:17.909716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 
nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:36.878 [2024-04-18 11:19:17.909909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.878 [2024-04-18 11:19:17.909923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.909959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.909973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.910835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:17.910862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.910887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.910903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.910923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.910937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.910973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.910987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:32:36.879 [2024-04-18 11:19:17.911526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:17.911873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.879 [2024-04-18 11:19:17.911886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.432785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:24.432840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.432897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:24.432918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.432941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:24.432957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.432979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:24.432994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.433015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:24.433046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.433101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:24.433118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.433138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.879 [2024-04-18 11:19:24.433153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.879 [2024-04-18 11:19:24.433174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.880 [2024-04-18 11:19:24.433626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.433968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85352 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.433990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.880 [2024-04-18 11:19:24.434612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.880 [2024-04-18 11:19:24.434627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434687] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.434967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.434982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.435020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.881 [2024-04-18 11:19:24.435078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 
11:19:24.435101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:36.881 [2024-04-18 11:19:24.435973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.881 [2024-04-18 11:19:24.435987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.882 [2024-04-18 11:19:24.436272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.436958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.436975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.882 [2024-04-18 11:19:24.437649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:36.882 [2024-04-18 11:19:24.437676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:32:36.883 [2024-04-18 11:19:24.437718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.437761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.437802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.437844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.437887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.437929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.437971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.437992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.438058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.438106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.438149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.438191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:24.438234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:24.438559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.883 [2024-04-18 11:19:24.438584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.883 [2024-04-18 11:19:31.511527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:36.883 [2024-04-18 11:19:31.511683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.883 [2024-04-18 11:19:31.511698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.511718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.884 [2024-04-18 11:19:31.511733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.511753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.884 [2024-04-18 11:19:31.511768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.884 [2024-04-18 11:19:31.512091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 
dnr:0 00:32:36.884 [2024-04-18 11:19:31.512935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.512970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.512985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.884 [2024-04-18 11:19:31.513394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.884 [2024-04-18 11:19:31.513408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.513444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.513479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.513515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.513550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.513591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.513626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.513664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.513699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.513741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.513778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.513814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.513835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.513851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.885 [2024-04-18 11:19:31.514347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:36.885 [2024-04-18 11:19:31.514499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.514975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.514996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.515011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.885 [2024-04-18 11:19:31.515046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.885 [2024-04-18 11:19:31.515064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.515101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.515146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.515193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.515233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.515268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:32:36.886 [2024-04-18 11:19:31.515629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.515985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.515999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.516055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.516096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.516131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.886 [2024-04-18 11:19:31.516428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.516463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.516498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.886 [2024-04-18 11:19:31.516545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:36.886 [2024-04-18 11:19:31.516566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.887 [2024-04-18 11:19:31.516722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.516966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.516981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.517978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.517999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.518013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.887 [2024-04-18 11:19:31.518066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:32:36.887 [2024-04-18 11:19:31.518535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.887 [2024-04-18 11:19:31.518664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.887 [2024-04-18 11:19:31.518685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.888 [2024-04-18 11:19:31.518699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.888 [2024-04-18 11:19:31.518720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.888 [2024-04-18 11:19:31.518734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.888 [2024-04-18 11:19:31.518755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.888 [2024-04-18 11:19:31.518769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:36.888 [2024-04-18 11:19:31.518795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.888 [2024-04-18 11:19:31.518810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.888 [2024-04-18 11:19:31.518830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.888 [2024-04-18 11:19:31.518845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.888 [2024-04-18 11:19:31.518866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.888 [2024-04-18 11:19:31.518880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:32:36.888 [2024-04-18 11:19:31.518900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.888 [2024-04-18 11:19:31.518915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... several hundred similar *NOTICE* command/completion pairs omitted: READ commands (sqid:1, lba 95568-96136, SGL TRANSPORT DATA BLOCK) and WRITE commands (sqid:1, lba 96144-96584, SGL DATA BLOCK OFFSET, len 0x1000) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0 p:0 m:0 dnr:0, sqhd advancing from 0055 and wrapping past 007f, logged 2024-04-18 11:19:31.518 through 11:19:31.542 ...]
00:32:36.893 [2024-04-18 11:19:31.542609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:36.893 [2024-04-18 11:19:31.542629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.542659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.542679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.542708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.542728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.542758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.542786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.542817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.542838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.542867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.542887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.542917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.542937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.542967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.542987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.543050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.543108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.543157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.543225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.893 [2024-04-18 11:19:31.543275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.893 [2024-04-18 11:19:31.543938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.893 [2024-04-18 11:19:31.543968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.543988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:32:36.894 [2024-04-18 11:19:31.545240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.894 [2024-04-18 11:19:31.545610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.545659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.545709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.545767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.545818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.545867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.545917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.545966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.545995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.894 [2024-04-18 11:19:31.546698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.894 [2024-04-18 11:19:31.546728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:36.894 [2024-04-18 11:19:31.546747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.546777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.546797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.546826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.546846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.546876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.546895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.546925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.546945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.546974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.547719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.547753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.547788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.547823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.547844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.547859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.548464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.548509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.548545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.895 [2024-04-18 11:19:31.548581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:32:36.895 [2024-04-18 11:19:31.548717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.548974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.548994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.895 [2024-04-18 11:19:31.549008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:36.895 [2024-04-18 11:19:31.549043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.896 [2024-04-18 11:19:31.549452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.896 [2024-04-18 11:19:31.549784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.549976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.549990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.550010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.550024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.550066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.550082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.550103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.550117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:36.896 [2024-04-18 11:19:31.550138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.896 [2024-04-18 11:19:31.550151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0
[repetitive nvme_qpair.c *NOTICE* output condensed: from 00:32:36.896 to 00:32:36.903 ([2024-04-18 11:19:31.550172] through [2024-04-18 11:19:31.567256]), 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion print matching command/completion pairs for READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1 nsid:1, len:8, lba 95568-96584; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, with sqhd advancing from 001b through 007f, wrapping to 0000, and continuing to 0067]
00:32:36.903 [2024-04-18 11:19:31.567291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.567307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.567329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.567344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.567956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.567986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.568057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.568103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.568139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.568174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.568209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:36.903 [2024-04-18 11:19:31.568673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.568980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.568993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.569039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.903 [2024-04-18 11:19:31.569079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.569114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.569148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.569182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.569217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.903 [2024-04-18 11:19:31.569259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.903 [2024-04-18 11:19:31.569281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:32:36.904 [2024-04-18 11:19:31.569748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.569900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.569934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.569968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.569989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.570003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.570051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.570088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.570122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.570172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.904 [2024-04-18 11:19:31.570207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:36.904 [2024-04-18 11:19:31.570478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.904 [2024-04-18 11:19:31.570492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.570512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.570526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.570547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.570561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.570582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.570613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.905 [2024-04-18 11:19:31.571562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.905 [2024-04-18 11:19:31.571911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.571954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.571980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95576 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.571997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.905 [2024-04-18 11:19:31.572832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 
dnr:0 00:32:36.905 [2024-04-18 11:19:31.572875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.905 [2024-04-18 11:19:31.572892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.572927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.572945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.572971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.572989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.573716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.573743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.573761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.574483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.574534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.574579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.574623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.574667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.574724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.906 [2024-04-18 11:19:31.574769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.574812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.574855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:36.906 [2024-04-18 11:19:31.574898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.574941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.574976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.574993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:36.906 [2024-04-18 11:19:31.575378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.906 [2024-04-18 11:19:31.575395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.575893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.575937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.575962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.575980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:32:36.907 [2024-04-18 11:19:31.576285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.907 [2024-04-18 11:19:31.576917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.576968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.576994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.577012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.577051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.577072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.577099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.577116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.907 [2024-04-18 11:19:31.577142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.907 [2024-04-18 11:19:31.577160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.577203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.577247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.577290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.908 [2024-04-18 11:19:31.577612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.577725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.577743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.908 [2024-04-18 11:19:31.578865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.578921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.578953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.578971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.579004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.579022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.579071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.579090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.579131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.579150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.579194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.579215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.579247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.579265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:36.908 [2024-04-18 11:19:31.579297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.908 [2024-04-18 11:19:31.579315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:32:36.909 [2024-04-18 11:19:31.579501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.579955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.579973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.580922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:31.580940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:31.581159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.909 [2024-04-18 11:19:31.581199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:44.991172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:44.991228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:44.991254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:44.991270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:44.991286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:44.991300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.909 [2024-04-18 11:19:44.991315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.909 [2024-04-18 11:19:44.991329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 
[2024-04-18 11:19:44.991496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991808] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.991983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.991997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992405] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.910 [2024-04-18 11:19:44.992489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.910 [2024-04-18 11:19:44.992503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.911 [2024-04-18 11:19:44.992517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.911 [2024-04-18 11:19:44.992545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.911 [2024-04-18 11:19:44.992573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.911 [2024-04-18 11:19:44.992601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.911 [2024-04-18 11:19:44.992629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.992965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.992978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 
11:19:44.993304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.911 [2024-04-18 11:19:44.993679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.911 [2024-04-18 11:19:44.993693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.993985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.993998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.994043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.912 [2024-04-18 11:19:44.994073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118528 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118536 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118544 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118552 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118560 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118568 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118576 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118584 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118592 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118600 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118608 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118616 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118624 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118632 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118640 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.912 [2024-04-18 11:19:44.994849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.912 [2024-04-18 11:19:44.994858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118648 len:8 PRP1 0x0 PRP2 0x0 00:32:36.912 [2024-04-18 11:19:44.994871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.912 [2024-04-18 11:19:44.994883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:44.994893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:44.994902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118656 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:44.994915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:44.994928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:44.994937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:44.994947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118664 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:44.994959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:44.994972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:44.994987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:44.994998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118672 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:44.995010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:44.995023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:44.995044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:44.995056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118680 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:44.995068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:44.995081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:44.995090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:44.995100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118688 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:44.995113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 
11:19:44.995130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:44.995140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:44.995150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118696 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:44.995163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.004943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.004975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.004988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118704 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118712 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118720 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118728 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118736 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005230] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118744 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118752 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118760 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118768 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:36.913 [2024-04-18 11:19:45.005418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:36.913 [2024-04-18 11:19:45.005428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118776 len:8 PRP1 0x0 PRP2 0x0 00:32:36.913 [2024-04-18 11:19:45.005448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005510] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ad970 was disconnected and freed. reset controller. 
00:32:36.913 [2024-04-18 11:19:45.005614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.913 [2024-04-18 11:19:45.005639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.913 [2024-04-18 11:19:45.005668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.913 [2024-04-18 11:19:45.005706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.913 [2024-04-18 11:19:45.005733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.913 [2024-04-18 11:19:45.005747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177fac0 is same with the state(5) to be set 00:32:36.913 [2024-04-18 11:19:45.007196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.913 [2024-04-18 11:19:45.007235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fac0 (9): Bad file descriptor 00:32:36.913 [2024-04-18 11:19:45.007348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.913 [2024-04-18 11:19:45.007409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.913 [2024-04-18 11:19:45.007432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177fac0 with addr=10.0.0.2, port=4421 00:32:36.913 [2024-04-18 11:19:45.007451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177fac0 is same with the state(5) to be set 00:32:36.913 [2024-04-18 11:19:45.007475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fac0 (9): Bad file descriptor 00:32:36.913 [2024-04-18 11:19:45.007497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:36.913 [2024-04-18 11:19:45.007512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:36.913 [2024-04-18 11:19:45.007526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:36.914 [2024-04-18 11:19:45.007581] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.914 [2024-04-18 11:19:45.007604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:36.914 [2024-04-18 11:19:55.098021] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
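The sequence just above is the interesting part of the multipath run: the qpair is freed, the host tries to reconnect to 10.0.0.2 port 4421, gets connect() errno 111, briefly marks the controller failed, and only completes a successful reset about ten seconds later. A hypothetical way to provoke the same path flap by hand, using only RPCs that already appear elsewhere in this log (the multipath test script may orchestrate it differently):

  # drop the secondary listener, wait, then restore it; the host side keeps retrying in the background
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421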
00:32:36.914 Received shutdown signal, test time was about 55.499426 seconds 00:32:36.914 00:32:36.914 Latency(us) 00:32:36.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.914 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:36.914 Verification LBA range: start 0x0 length 0x4000 00:32:36.914 Nvme0n1 : 55.50 7080.99 27.66 0.00 0.00 18044.82 1854.37 7107438.78 00:32:36.914 =================================================================================================================== 00:32:36.914 Total : 7080.99 27.66 0.00 0.00 18044.82 1854.37 7107438.78 00:32:36.914 11:20:05 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:37.171 11:20:05 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:32:37.171 11:20:05 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:32:37.171 11:20:05 -- host/multipath.sh@125 -- # nvmftestfini 00:32:37.171 11:20:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:37.171 11:20:05 -- nvmf/common.sh@117 -- # sync 00:32:37.171 11:20:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:37.171 11:20:05 -- nvmf/common.sh@120 -- # set +e 00:32:37.171 11:20:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:37.171 11:20:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:37.171 rmmod nvme_tcp 00:32:37.171 rmmod nvme_fabrics 00:32:37.171 rmmod nvme_keyring 00:32:37.171 11:20:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:37.171 11:20:05 -- nvmf/common.sh@124 -- # set -e 00:32:37.171 11:20:05 -- nvmf/common.sh@125 -- # return 0 00:32:37.171 11:20:05 -- nvmf/common.sh@478 -- # '[' -n 106146 ']' 00:32:37.171 11:20:05 -- nvmf/common.sh@479 -- # killprocess 106146 00:32:37.171 11:20:05 -- common/autotest_common.sh@936 -- # '[' -z 106146 ']' 00:32:37.171 11:20:05 -- common/autotest_common.sh@940 -- # kill -0 106146 00:32:37.171 11:20:05 -- common/autotest_common.sh@941 -- # uname 00:32:37.171 11:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:37.171 11:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106146 00:32:37.171 killing process with pid 106146 00:32:37.171 11:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:37.171 11:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:37.171 11:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 106146' 00:32:37.171 11:20:05 -- common/autotest_common.sh@955 -- # kill 106146 00:32:37.171 11:20:05 -- common/autotest_common.sh@960 -- # wait 106146 00:32:37.430 11:20:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:37.430 11:20:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:37.430 11:20:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:37.430 11:20:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.430 11:20:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:37.430 11:20:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.430 11:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.430 11:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.688 11:20:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:37.688 00:32:37.688 real 1m1.717s 00:32:37.688 user 2m55.789s 00:32:37.688 sys 0m13.277s 00:32:37.688 11:20:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 
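A quick consistency check on the Latency(us) summary above: the throughput column is just IOPS times the 4096-byte I/O size, for example

  echo 'scale=2; 7080.99 * 4096 / 1048576' | bc   # 27.66 MiB/s, matching the Nvme0n1 row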
00:32:37.688 ************************************ 00:32:37.688 END TEST nvmf_multipath 00:32:37.688 ************************************ 00:32:37.688 11:20:06 -- common/autotest_common.sh@10 -- # set +x 00:32:37.688 11:20:06 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:32:37.688 11:20:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:37.688 11:20:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:37.688 11:20:06 -- common/autotest_common.sh@10 -- # set +x 00:32:37.688 ************************************ 00:32:37.688 START TEST nvmf_timeout 00:32:37.688 ************************************ 00:32:37.688 11:20:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:32:37.688 * Looking for test storage... 00:32:37.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:37.689 11:20:06 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:37.689 11:20:06 -- nvmf/common.sh@7 -- # uname -s 00:32:37.689 11:20:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.689 11:20:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.689 11:20:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.689 11:20:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.689 11:20:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.689 11:20:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.689 11:20:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.689 11:20:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.689 11:20:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.689 11:20:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.689 11:20:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:32:37.689 11:20:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:32:37.689 11:20:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.689 11:20:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.689 11:20:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:37.689 11:20:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.689 11:20:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:37.689 11:20:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.689 11:20:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.689 11:20:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.689 11:20:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.689 11:20:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.689 11:20:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.689 11:20:06 -- paths/export.sh@5 -- # export PATH 00:32:37.689 11:20:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.689 11:20:06 -- nvmf/common.sh@47 -- # : 0 00:32:37.689 11:20:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:37.689 11:20:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:37.689 11:20:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.689 11:20:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.689 11:20:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.689 11:20:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:37.689 11:20:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:37.689 11:20:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:37.689 11:20:06 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:37.689 11:20:06 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:37.689 11:20:06 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:37.689 11:20:06 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:32:37.689 11:20:06 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:37.689 11:20:06 -- host/timeout.sh@19 -- # nvmftestinit 00:32:37.689 11:20:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:37.689 11:20:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:37.689 11:20:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:37.689 11:20:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:37.689 11:20:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:37.689 11:20:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.689 11:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.689 11:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.689 11:20:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
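The host identity sourced from nvmf/common.sh above is a freshly generated UUID-based NQN, with the bare UUID reused as the host ID. A minimal sketch of one way to derive the same pair (assuming nvme-cli is installed; common.sh may do it differently):

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the trailing uuid
  echo "$NVME_HOSTNQN" "$NVME_HOSTID"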
00:32:37.689 11:20:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:32:37.689 11:20:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:32:37.689 11:20:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:32:37.689 11:20:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:32:37.689 11:20:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:32:37.689 11:20:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.689 11:20:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.689 11:20:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:37.689 11:20:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:37.689 11:20:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:37.689 11:20:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:37.689 11:20:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:37.689 11:20:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.689 11:20:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:37.689 11:20:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:37.689 11:20:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:37.689 11:20:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:37.689 11:20:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:37.947 11:20:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:37.947 Cannot find device "nvmf_tgt_br" 00:32:37.947 11:20:06 -- nvmf/common.sh@155 -- # true 00:32:37.947 11:20:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:37.947 Cannot find device "nvmf_tgt_br2" 00:32:37.947 11:20:06 -- nvmf/common.sh@156 -- # true 00:32:37.947 11:20:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:37.947 11:20:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:37.947 Cannot find device "nvmf_tgt_br" 00:32:37.947 11:20:06 -- nvmf/common.sh@158 -- # true 00:32:37.947 11:20:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:37.947 Cannot find device "nvmf_tgt_br2" 00:32:37.947 11:20:06 -- nvmf/common.sh@159 -- # true 00:32:37.947 11:20:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:37.947 11:20:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:37.947 11:20:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:37.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:37.947 11:20:06 -- nvmf/common.sh@162 -- # true 00:32:37.947 11:20:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:37.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:37.947 11:20:06 -- nvmf/common.sh@163 -- # true 00:32:37.947 11:20:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:37.947 11:20:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:37.947 11:20:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:37.947 11:20:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:37.947 11:20:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:37.947 11:20:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:37.947 11:20:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:32:37.947 11:20:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:37.947 11:20:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:37.947 11:20:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:37.947 11:20:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:37.947 11:20:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:37.947 11:20:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:37.947 11:20:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:37.947 11:20:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:37.947 11:20:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:38.205 11:20:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:38.205 11:20:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:38.205 11:20:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:38.205 11:20:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:38.205 11:20:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:38.205 11:20:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:38.205 11:20:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:38.205 11:20:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:38.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:32:38.205 00:32:38.205 --- 10.0.0.2 ping statistics --- 00:32:38.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.205 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:32:38.205 11:20:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:38.205 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:38.205 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:32:38.205 00:32:38.205 --- 10.0.0.3 ping statistics --- 00:32:38.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.205 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:32:38.205 11:20:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:38.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:32:38.205 00:32:38.205 --- 10.0.0.1 ping statistics --- 00:32:38.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.205 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:32:38.205 11:20:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.205 11:20:06 -- nvmf/common.sh@422 -- # return 0 00:32:38.205 11:20:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:32:38.205 11:20:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.205 11:20:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:38.205 11:20:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:38.205 11:20:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.205 11:20:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:38.205 11:20:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:38.205 11:20:06 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:32:38.205 11:20:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:38.205 11:20:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:38.205 11:20:06 -- common/autotest_common.sh@10 -- # set +x 00:32:38.205 11:20:06 -- nvmf/common.sh@470 -- # nvmfpid=107502 00:32:38.205 11:20:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:38.205 11:20:06 -- nvmf/common.sh@471 -- # waitforlisten 107502 00:32:38.205 11:20:06 -- common/autotest_common.sh@817 -- # '[' -z 107502 ']' 00:32:38.205 11:20:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.205 11:20:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:38.205 11:20:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.205 11:20:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:38.205 11:20:06 -- common/autotest_common.sh@10 -- # set +x 00:32:38.205 [2024-04-18 11:20:06.755734] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:32:38.205 [2024-04-18 11:20:06.755837] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.463 [2024-04-18 11:20:06.898116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:38.463 [2024-04-18 11:20:07.000181] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.463 [2024-04-18 11:20:07.000252] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.463 [2024-04-18 11:20:07.000266] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.463 [2024-04-18 11:20:07.000277] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.463 [2024-04-18 11:20:07.000286] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
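Stripped of the xtrace noise, the network plumbing set up above amounts to one network namespace, three veth pairs and a bridge, with TCP port 4420 opened towards the initiator. The same topology in condensed form (interface names and addresses exactly as in the log; needs root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1, the namespaced target interfaces get 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # everything hangs off a single bridge
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT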
00:32:38.463 [2024-04-18 11:20:07.000458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.463 [2024-04-18 11:20:07.000474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.393 11:20:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:39.393 11:20:07 -- common/autotest_common.sh@850 -- # return 0 00:32:39.393 11:20:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:39.393 11:20:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:39.393 11:20:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.393 11:20:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.393 11:20:07 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:39.393 11:20:07 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:39.393 [2024-04-18 11:20:08.012638] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.650 11:20:08 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:39.907 Malloc0 00:32:39.907 11:20:08 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.164 11:20:08 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.422 11:20:08 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.679 [2024-04-18 11:20:09.206625] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.679 11:20:09 -- host/timeout.sh@32 -- # bdevperf_pid=107599 00:32:40.679 11:20:09 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:32:40.679 11:20:09 -- host/timeout.sh@34 -- # waitforlisten 107599 /var/tmp/bdevperf.sock 00:32:40.679 11:20:09 -- common/autotest_common.sh@817 -- # '[' -z 107599 ']' 00:32:40.679 11:20:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:40.679 11:20:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:40.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:40.679 11:20:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:40.679 11:20:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:40.679 11:20:09 -- common/autotest_common.sh@10 -- # set +x 00:32:40.679 [2024-04-18 11:20:09.274545] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
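Collapsed from the xtrace output above, the timeout test's scaffolding is a single 64 MiB malloc namespace exported over NVMe/TCP, with bdevperf launched as the initiator on its own RPC socket. The same setup as a plain script (paths and arguments as logged; bdevperf is normally managed by the test harness rather than backgrounded by hand):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf waits (-z) until a controller is attached over /var/tmp/bdevperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &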
00:32:40.679 [2024-04-18 11:20:09.274638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107599 ] 00:32:40.937 [2024-04-18 11:20:09.409828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.938 [2024-04-18 11:20:09.515792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:41.884 11:20:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:41.884 11:20:10 -- common/autotest_common.sh@850 -- # return 0 00:32:41.884 11:20:10 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:42.141 11:20:10 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:32:42.399 NVMe0n1 00:32:42.399 11:20:10 -- host/timeout.sh@51 -- # rpc_pid=107641 00:32:42.399 11:20:10 -- host/timeout.sh@53 -- # sleep 1 00:32:42.399 11:20:10 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:42.399 Running I/O for 10 seconds... 00:32:43.333 11:20:11 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.594 [2024-04-18 11:20:12.166176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 
[2024-04-18 11:20:12.166349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.594 [2024-04-18 11:20:12.166357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166365] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166382] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166424] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the 
state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166629] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166638] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166646] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166654] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166670] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c490 is same with the state(5) to be set 00:32:43.595 [2024-04-18 11:20:12.166961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77544 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.595 [2024-04-18 11:20:12.167435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.595 [2024-04-18 11:20:12.167444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:43.596 [2024-04-18 11:20:12.167465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167880] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.167983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.167994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.596 [2024-04-18 11:20:12.168222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.596 [2024-04-18 11:20:12.168242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.596 [2024-04-18 11:20:12.168262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.596 [2024-04-18 11:20:12.168284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.596 [2024-04-18 11:20:12.168296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.596 [2024-04-18 11:20:12.168305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:43.597 [2024-04-18 11:20:12.168337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.597 [2024-04-18 11:20:12.168747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77992 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.168986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.168995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.169007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.169016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.169028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.169049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.169061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.169070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.169087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.169097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.169109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.169119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.169130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.169139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.597 [2024-04-18 11:20:12.169151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.597 [2024-04-18 11:20:12.169160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:43.598 [2024-04-18 11:20:12.169201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 
11:20:12.169409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.598 [2024-04-18 11:20:12.169732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657920 is same with the state(5) to be set 00:32:43.598 [2024-04-18 11:20:12.169756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:43.598 [2024-04-18 11:20:12.169765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:43.598 [2024-04-18 11:20:12.169774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:32:43.598 [2024-04-18 11:20:12.169784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.598 [2024-04-18 11:20:12.169850] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1657920 was disconnected and freed. reset controller. 
00:32:43.598 [2024-04-18 11:20:12.170090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.598 [2024-04-18 11:20:12.170180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639bd0 (9): Bad file descriptor 00:32:43.598 [2024-04-18 11:20:12.170300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.598 [2024-04-18 11:20:12.170357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.598 [2024-04-18 11:20:12.170374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1639bd0 with addr=10.0.0.2, port=4420 00:32:43.598 [2024-04-18 11:20:12.170385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639bd0 is same with the state(5) to be set 00:32:43.598 [2024-04-18 11:20:12.170403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639bd0 (9): Bad file descriptor 00:32:43.598 [2024-04-18 11:20:12.170420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:43.598 [2024-04-18 11:20:12.170430] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:43.598 [2024-04-18 11:20:12.170440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:43.598 [2024-04-18 11:20:12.170461] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:43.598 [2024-04-18 11:20:12.170472] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:43.598 11:20:12 -- host/timeout.sh@56 -- # sleep 2 00:32:46.125 [2024-04-18 11:20:14.170643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-04-18 11:20:14.170746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.125 [2024-04-18 11:20:14.170767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1639bd0 with addr=10.0.0.2, port=4420 00:32:46.125 [2024-04-18 11:20:14.170781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639bd0 is same with the state(5) to be set 00:32:46.126 [2024-04-18 11:20:14.170810] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639bd0 (9): Bad file descriptor 00:32:46.126 [2024-04-18 11:20:14.170829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.126 [2024-04-18 11:20:14.170839] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.126 [2024-04-18 11:20:14.170850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.126 [2024-04-18 11:20:14.170878] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.126 [2024-04-18 11:20:14.170891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.126 11:20:14 -- host/timeout.sh@57 -- # get_controller 00:32:46.126 11:20:14 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:46.126 11:20:14 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:32:46.126 11:20:14 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:32:46.126 11:20:14 -- host/timeout.sh@58 -- # get_bdev 00:32:46.126 11:20:14 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:32:46.126 11:20:14 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:32:46.126 11:20:14 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:32:46.126 11:20:14 -- host/timeout.sh@61 -- # sleep 5 00:32:47.776 [2024-04-18 11:20:16.171098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.776 [2024-04-18 11:20:16.171213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.776 [2024-04-18 11:20:16.171234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1639bd0 with addr=10.0.0.2, port=4420 00:32:47.776 [2024-04-18 11:20:16.171248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639bd0 is same with the state(5) to be set 00:32:47.776 [2024-04-18 11:20:16.171277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639bd0 (9): Bad file descriptor 00:32:47.776 [2024-04-18 11:20:16.171298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.776 [2024-04-18 11:20:16.171309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.776 [2024-04-18 11:20:16.171320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.776 [2024-04-18 11:20:16.171348] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.776 [2024-04-18 11:20:16.171362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.677 [2024-04-18 11:20:18.171460] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
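[Editorial note, not part of the captured output: the two rpc.py/jq pipelines traced above are how the test confirms that, despite the failed controller resets, the controller and bdev objects are still registered in the bdevperf application. Spelled out as plain commands, with the paths, socket and expected names copied from the trace (the variable names are just for readability):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')   # expected: NVMe0
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')               # expected: NVMe0n1
[[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]] && echo "controller and bdev still registered"
]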
00:32:50.610 00:32:50.610 Latency(us) 00:32:50.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.610 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:50.610 Verification LBA range: start 0x0 length 0x4000 00:32:50.610 NVMe0n1 : 8.14 1189.45 4.65 15.72 0.00 106016.13 2070.34 7015926.69 00:32:50.610 =================================================================================================================== 00:32:50.610 Total : 1189.45 4.65 15.72 0.00 106016.13 2070.34 7015926.69 00:32:50.610 0 00:32:51.176 11:20:19 -- host/timeout.sh@62 -- # get_controller 00:32:51.176 11:20:19 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:51.176 11:20:19 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:32:51.433 11:20:20 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:32:51.433 11:20:20 -- host/timeout.sh@63 -- # get_bdev 00:32:51.433 11:20:20 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:32:51.433 11:20:20 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:32:51.692 11:20:20 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:32:51.692 11:20:20 -- host/timeout.sh@65 -- # wait 107641 00:32:51.692 11:20:20 -- host/timeout.sh@67 -- # killprocess 107599 00:32:51.692 11:20:20 -- common/autotest_common.sh@936 -- # '[' -z 107599 ']' 00:32:51.692 11:20:20 -- common/autotest_common.sh@940 -- # kill -0 107599 00:32:51.692 11:20:20 -- common/autotest_common.sh@941 -- # uname 00:32:51.692 11:20:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:51.692 11:20:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 107599 00:32:51.692 11:20:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:32:51.692 11:20:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:32:51.692 killing process with pid 107599 00:32:51.692 11:20:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 107599' 00:32:51.692 11:20:20 -- common/autotest_common.sh@955 -- # kill 107599 00:32:51.692 11:20:20 -- common/autotest_common.sh@960 -- # wait 107599 00:32:51.692 Received shutdown signal, test time was about 9.241555 seconds 00:32:51.692 00:32:51.692 Latency(us) 00:32:51.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.692 =================================================================================================================== 00:32:51.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:51.948 11:20:20 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.205 [2024-04-18 11:20:20.746708] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.205 11:20:20 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:32:52.205 11:20:20 -- host/timeout.sh@74 -- # bdevperf_pid=107800 00:32:52.205 11:20:20 -- host/timeout.sh@76 -- # waitforlisten 107800 /var/tmp/bdevperf.sock 00:32:52.205 11:20:20 -- common/autotest_common.sh@817 -- # '[' -z 107800 ']' 00:32:52.205 11:20:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:52.205 11:20:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:52.205 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock... 00:32:52.205 11:20:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:52.205 11:20:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:52.205 11:20:20 -- common/autotest_common.sh@10 -- # set +x 00:32:52.205 [2024-04-18 11:20:20.819868] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:32:52.205 [2024-04-18 11:20:20.819977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107800 ] 00:32:52.462 [2024-04-18 11:20:20.961497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.462 [2024-04-18 11:20:21.059202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.395 11:20:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:53.395 11:20:21 -- common/autotest_common.sh@850 -- # return 0 00:32:53.395 11:20:21 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:53.679 11:20:22 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:32:53.958 NVMe0n1 00:32:53.958 11:20:22 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:53.958 11:20:22 -- host/timeout.sh@84 -- # rpc_pid=107849 00:32:53.958 11:20:22 -- host/timeout.sh@86 -- # sleep 1 00:32:53.958 Running I/O for 10 seconds... 
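[Editorial note, not part of the captured output: the prologue traced above (timeout.sh@71 through @86) re-adds the NVMe/TCP listener on the target, starts a fresh bdevperf instance, and re-attaches the controller with explicit reconnect timeouts before kicking off the 10-second verify run. Condensed into a plain command sequence, with every path, flag and value copied from the trace; this is a reading aid, not a verbatim excerpt of timeout.sh, and the waitforlisten step is elided:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# re-add the TCP listener on the target side
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# start bdevperf idle (-z) with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" \
        -q 128 -o 4096 -w verify -t 10 -f &
# ... wait for $sock to appear, then configure the bdev layer over it ...
"$rpc" -s "$sock" bdev_nvme_set_options -r -1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
# trigger the actual I/O run
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &

Read literally, the attach flags ask the bdev layer to retry the connection roughly once per second, start failing pending I/O fast after two seconds, and give up on the controller after five seconds without a connection; this gloss follows the option names and is offered only as a reading aid.]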
00:32:54.893 11:20:23 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.154 [2024-04-18 11:20:23.644435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644514] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644629] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644637] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644653] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644697] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644705] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644745] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644769] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644807] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644899] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644916] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.154 [2024-04-18 11:20:23.644957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.644964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.644972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.644980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.644987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.644995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.645003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.645010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.645018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.645026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.645034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.645042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.645063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf65fe0 is same with the state(5) to be set 00:32:55.155 [2024-04-18 11:20:23.647272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 
11:20:23.647548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.647982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.155 [2024-04-18 11:20:23.647991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.155 [2024-04-18 11:20:23.648003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80936 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 
11:20:23.648435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.156 [2024-04-18 11:20:23.648824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.156 [2024-04-18 11:20:23.648835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.648855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.648875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.648900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.648920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.648941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.648961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.648981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.648991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.649012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.157 [2024-04-18 11:20:23.649041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81248 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81256 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81264 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81272 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81280 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81288 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81296 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 
11:20:23.649334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81304 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81312 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81320 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81328 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81336 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81344 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81352 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81360 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81368 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.157 [2024-04-18 11:20:23.649624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.157 [2024-04-18 11:20:23.649631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.157 [2024-04-18 11:20:23.649638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81376 len:8 PRP1 0x0 PRP2 0x0 00:32:55.157 [2024-04-18 11:20:23.649647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81384 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81392 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:81400 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81408 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81416 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81424 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81432 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81440 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81448 len:8 PRP1 0x0 PRP2 0x0 
00:32:55.158 [2024-04-18 11:20:23.649949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.649973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81456 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.649981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.649990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.649997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81464 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81472 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81480 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81488 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81496 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81504 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81512 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81520 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81528 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.650297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.650306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.650313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.650337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81536 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.665387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.665425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.158 [2024-04-18 11:20:23.665435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.158 [2024-04-18 11:20:23.665445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81544 len:8 PRP1 0x0 PRP2 0x0 00:32:55.158 [2024-04-18 11:20:23.665454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.158 [2024-04-18 11:20:23.665464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81552 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81560 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81568 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81576 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81584 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81592 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:55.159 [2024-04-18 11:20:23.665663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81600 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81608 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81616 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.159 [2024-04-18 11:20:23.665769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.159 [2024-04-18 11:20:23.665776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81624 len:8 PRP1 0x0 PRP2 0x0 00:32:55.159 [2024-04-18 11:20:23.665784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.159 [2024-04-18 11:20:23.665856] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13fc800 was disconnected and freed. reset controller. 
00:32:55.159 [2024-04-18 11:20:23.665972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:55.159 [2024-04-18 11:20:23.665989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:55.159 [2024-04-18 11:20:23.666001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:55.159 [2024-04-18 11:20:23.666010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:55.159 [2024-04-18 11:20:23.666020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:55.159 [2024-04-18 11:20:23.666043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:55.159 [2024-04-18 11:20:23.666059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:55.159 [2024-04-18 11:20:23.666068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:55.159 [2024-04-18 11:20:23.666078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set
00:32:55.159 [2024-04-18 11:20:23.666292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.159 [2024-04-18 11:20:23.666325] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor
00:32:55.159 [2024-04-18 11:20:23.666446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.159 [2024-04-18 11:20:23.666500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.159 [2024-04-18 11:20:23.666533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13debd0 with addr=10.0.0.2, port=4420
00:32:55.159 [2024-04-18 11:20:23.666551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set
00:32:55.159 [2024-04-18 11:20:23.666579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor
00:32:55.159 [2024-04-18 11:20:23.666605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.159 [2024-04-18 11:20:23.666623] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.159 [2024-04-18 11:20:23.666641] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.159 [2024-04-18 11:20:23.666662] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.159 [2024-04-18 11:20:23.666673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.159 11:20:23 -- host/timeout.sh@90 -- # sleep 1
00:32:56.093 [2024-04-18 11:20:24.666816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.093 [2024-04-18 11:20:24.666941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.093 [2024-04-18 11:20:24.666962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13debd0 with addr=10.0.0.2, port=4420
00:32:56.093 [2024-04-18 11:20:24.666977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set
00:32:56.093 [2024-04-18 11:20:24.667003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor
00:32:56.093 [2024-04-18 11:20:24.667023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:56.093 [2024-04-18 11:20:24.667033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:56.093 [2024-04-18 11:20:24.667043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:56.093 [2024-04-18 11:20:24.667086] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:56.093 [2024-04-18 11:20:24.667099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:56.093 11:20:24 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:56.351 [2024-04-18 11:20:24.926395] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:56.351 11:20:24 -- host/timeout.sh@92 -- # wait 107849
00:32:57.286 [2024-04-18 11:20:25.681851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:05.398
00:33:05.398                                                        Latency(us)
00:33:05.398 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:05.398 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:05.398 Verification LBA range: start 0x0 length 0x4000
00:33:05.398 NVMe0n1                     :      10.01    6173.61      24.12       0.00     0.00   20696.33    1556.48 3050402.91
00:33:05.398 ===================================================================================================================
00:33:05.398 Total                       :             6173.61      24.12       0.00     0.00   20696.33    1556.48 3050402.91
00:33:05.398 0
00:33:05.398 11:20:32 -- host/timeout.sh@97 -- # rpc_pid=107966
00:33:05.398 11:20:32 -- host/timeout.sh@98 -- # sleep 1
00:33:05.398 11:20:32 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:05.398 Running I/O for 10 seconds...
00:33:05.398 11:20:33 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.398 [2024-04-18 11:20:33.736782] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736838] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.736994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.398 [2024-04-18 11:20:33.737184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737200] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf64530 is same with the state(5) to be set 00:33:05.399 [2024-04-18 11:20:33.737847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.737888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.737910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.737922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.737935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.399 [2024-04-18 11:20:33.737945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.737957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.399 [2024-04-18 11:20:33.737966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.737977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.399 [2024-04-18 11:20:33.737987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.737998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.399 [2024-04-18 11:20:33.738481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.399 [2024-04-18 11:20:33.738501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.399 [2024-04-18 11:20:33.738522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.399 [2024-04-18 11:20:33.738543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.399 [2024-04-18 11:20:33.738554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76560 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 
[2024-04-18 11:20:33.738860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.738987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.738998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.400 [2024-04-18 11:20:33.739322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.400 [2024-04-18 11:20:33.739333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.739506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 
11:20:33.739745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.739984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.739996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.740005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.740016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.740026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.740046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.740057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.740068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.740077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.740088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.740097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.740108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.401 [2024-04-18 11:20:33.740126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.401 [2024-04-18 11:20:33.740137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.401 [2024-04-18 11:20:33.740147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.402 [2024-04-18 11:20:33.740167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:95 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.402 [2024-04-18 11:20:33.740188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.402 [2024-04-18 11:20:33.740208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.402 [2024-04-18 11:20:33.740228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.402 [2024-04-18 11:20:33.740248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.402 [2024-04-18 11:20:33.740268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76344 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76352 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 
11:20:33.740428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76416 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76424 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76432 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76440 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.740757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76448 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.740766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.740775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.740782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.750448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76456 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.750518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.750545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.750558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.750571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:76464 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.750584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.750597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.402 [2024-04-18 11:20:33.750607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.402 [2024-04-18 11:20:33.750619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76472 len:8 PRP1 0x0 PRP2 0x0 00:33:05.402 [2024-04-18 11:20:33.750632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.402 [2024-04-18 11:20:33.750723] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13fee20 was disconnected and freed. reset controller. 00:33:05.402 [2024-04-18 11:20:33.750893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.402 [2024-04-18 11:20:33.750930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.403 [2024-04-18 11:20:33.750949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.403 [2024-04-18 11:20:33.750962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.403 [2024-04-18 11:20:33.750977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.403 [2024-04-18 11:20:33.750990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.403 [2024-04-18 11:20:33.751004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.403 [2024-04-18 11:20:33.751018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.403 [2024-04-18 11:20:33.751050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set 00:33:05.403 [2024-04-18 11:20:33.751406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.403 [2024-04-18 11:20:33.751447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor 00:33:05.403 [2024-04-18 11:20:33.751583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.403 [2024-04-18 11:20:33.751669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.403 [2024-04-18 11:20:33.751707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13debd0 with addr=10.0.0.2, port=4420 00:33:05.403 [2024-04-18 11:20:33.751722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set 00:33:05.403 [2024-04-18 11:20:33.751747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor 
00:33:05.403 [2024-04-18 11:20:33.751768] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.403 [2024-04-18 11:20:33.751782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.403 [2024-04-18 11:20:33.751796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.403 [2024-04-18 11:20:33.751823] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.403 [2024-04-18 11:20:33.751837] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.403 11:20:33 -- host/timeout.sh@101 -- # sleep 3 00:33:06.337 [2024-04-18 11:20:34.751982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.337 [2024-04-18 11:20:34.752100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.337 [2024-04-18 11:20:34.752121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13debd0 with addr=10.0.0.2, port=4420 00:33:06.337 [2024-04-18 11:20:34.752135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set 00:33:06.337 [2024-04-18 11:20:34.752163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor 00:33:06.337 [2024-04-18 11:20:34.752182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.337 [2024-04-18 11:20:34.752192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.337 [2024-04-18 11:20:34.752203] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.337 [2024-04-18 11:20:34.752231] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.337 [2024-04-18 11:20:34.752242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.330 [2024-04-18 11:20:35.752397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.330 [2024-04-18 11:20:35.752523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.330 [2024-04-18 11:20:35.752543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13debd0 with addr=10.0.0.2, port=4420 00:33:07.330 [2024-04-18 11:20:35.752557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set 00:33:07.331 [2024-04-18 11:20:35.752585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor 00:33:07.331 [2024-04-18 11:20:35.752604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.331 [2024-04-18 11:20:35.752614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.331 [2024-04-18 11:20:35.752625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.331 [2024-04-18 11:20:35.752654] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.331 [2024-04-18 11:20:35.752665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.275 [2024-04-18 11:20:36.756209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.275 [2024-04-18 11:20:36.756320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.275 [2024-04-18 11:20:36.756341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13debd0 with addr=10.0.0.2, port=4420 00:33:08.275 [2024-04-18 11:20:36.756355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13debd0 is same with the state(5) to be set 00:33:08.275 [2024-04-18 11:20:36.756620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13debd0 (9): Bad file descriptor 00:33:08.275 [2024-04-18 11:20:36.756876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.275 [2024-04-18 11:20:36.756899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.275 [2024-04-18 11:20:36.756910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.275 11:20:36 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:08.275 [2024-04-18 11:20:36.760721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.275 [2024-04-18 11:20:36.760747] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.534 [2024-04-18 11:20:36.961946] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.534 11:20:36 -- host/timeout.sh@103 -- # wait 107966 00:33:09.468 [2024-04-18 11:20:37.799907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:14.734 00:33:14.734 Latency(us) 00:33:14.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.734 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:14.734 Verification LBA range: start 0x0 length 0x4000 00:33:14.734 NVMe0n1 : 10.01 5175.53 20.22 3715.20 0.00 14357.79 644.19 3019898.88 00:33:14.734 =================================================================================================================== 00:33:14.734 Total : 5175.53 20.22 3715.20 0.00 14357.79 0.00 3019898.88 00:33:14.734 0 00:33:14.734 11:20:42 -- host/timeout.sh@105 -- # killprocess 107800 00:33:14.734 11:20:42 -- common/autotest_common.sh@936 -- # '[' -z 107800 ']' 00:33:14.734 11:20:42 -- common/autotest_common.sh@940 -- # kill -0 107800 00:33:14.734 11:20:42 -- common/autotest_common.sh@941 -- # uname 00:33:14.734 11:20:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:14.734 11:20:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 107800 00:33:14.735 11:20:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:33:14.735 11:20:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:33:14.735 killing process with pid 107800 00:33:14.735 11:20:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 107800' 00:33:14.735 11:20:42 -- common/autotest_common.sh@955 -- # kill 107800 00:33:14.735 Received shutdown signal, test time was about 10.000000 seconds 00:33:14.735 00:33:14.735 Latency(us) 00:33:14.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.735 =================================================================================================================== 00:33:14.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:14.735 11:20:42 -- common/autotest_common.sh@960 -- # wait 107800 00:33:14.735 11:20:42 -- host/timeout.sh@110 -- # bdevperf_pid=108087 00:33:14.735 11:20:42 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:33:14.735 11:20:42 -- host/timeout.sh@112 -- # waitforlisten 108087 /var/tmp/bdevperf.sock 00:33:14.735 11:20:42 -- common/autotest_common.sh@817 -- # '[' -z 108087 ']' 00:33:14.735 11:20:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:14.735 11:20:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:14.735 11:20:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:14.735 11:20:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:14.735 11:20:42 -- common/autotest_common.sh@10 -- # set +x 00:33:14.735 [2024-04-18 11:20:42.940124] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:33:14.735 [2024-04-18 11:20:42.940781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108087 ] 00:33:14.735 [2024-04-18 11:20:43.080343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.735 [2024-04-18 11:20:43.172332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.322 11:20:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:15.322 11:20:43 -- common/autotest_common.sh@850 -- # return 0 00:33:15.322 11:20:43 -- host/timeout.sh@116 -- # dtrace_pid=108114 00:33:15.322 11:20:43 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 108087 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:33:15.322 11:20:43 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:33:15.580 11:20:44 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:33:15.836 NVMe0n1 00:33:15.836 11:20:44 -- host/timeout.sh@124 -- # rpc_pid=108163 00:33:15.836 11:20:44 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:15.836 11:20:44 -- host/timeout.sh@125 -- # sleep 1 00:33:16.093 Running I/O for 10 seconds... 00:33:17.028 11:20:45 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.028 [2024-04-18 11:20:45.612123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612227] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.028 [2024-04-18 11:20:45.612362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612386] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612454] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612489] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612516] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67160 is same with the state(5) to be set 00:33:17.029 [2024-04-18 11:20:45.612796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.612828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.612851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.612862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.612874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.612884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.612904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.612914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.612926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.612935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.612947] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.612958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.612969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.612979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.612991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.029 [2024-04-18 11:20:45.613472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.029 [2024-04-18 11:20:45.613481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109616 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:17.030 [2024-04-18 11:20:45.613824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.613983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.613992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 
11:20:45.614053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.030 [2024-04-18 11:20:45.614348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.030 [2024-04-18 11:20:45.614357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614487] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:17.031 [2024-04-18 11:20:45.614935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.614985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.614996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615164] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.031 [2024-04-18 11:20:45.615242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.031 [2024-04-18 11:20:45.615251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.032 [2024-04-18 11:20:45.615623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603920 is same with the state(5) to be set 00:33:17.032 [2024-04-18 11:20:45.615647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:17.032 [2024-04-18 11:20:45.615655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:17.032 [2024-04-18 11:20:45.615663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63520 len:8 PRP1 0x0 PRP2 0x0 00:33:17.032 [2024-04-18 11:20:45.615672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615732] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1603920 was disconnected and freed. reset controller. 00:33:17.032 [2024-04-18 11:20:45.615855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.032 [2024-04-18 11:20:45.615874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.032 [2024-04-18 11:20:45.615895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.032 [2024-04-18 11:20:45.615913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.032 [2024-04-18 11:20:45.615932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.032 [2024-04-18 11:20:45.615941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e5bd0 is same with the state(5) to be set 00:33:17.032 [2024-04-18 11:20:45.616247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:17.032 [2024-04-18 11:20:45.616283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e5bd0 (9): Bad file descriptor 00:33:17.032 [2024-04-18 11:20:45.616412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.032 [2024-04-18 11:20:45.616651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.032 [2024-04-18 11:20:45.616713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e5bd0 with addr=10.0.0.2, port=4420 00:33:17.032 [2024-04-18 11:20:45.616897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15e5bd0 is same with the state(5) to be set 00:33:17.032 [2024-04-18 11:20:45.616969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e5bd0 (9): Bad file descriptor 00:33:17.032 [2024-04-18 11:20:45.617121] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:17.032 [2024-04-18 11:20:45.617178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:17.032 [2024-04-18 11:20:45.617231] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:17.032 [2024-04-18 11:20:45.617357] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:17.032 [2024-04-18 11:20:45.617409] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:17.032 11:20:45 -- host/timeout.sh@128 -- # wait 108163 00:33:19.568 [2024-04-18 11:20:47.617774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.569 [2024-04-18 11:20:47.618132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.569 [2024-04-18 11:20:47.618304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e5bd0 with addr=10.0.0.2, port=4420 00:33:19.569 [2024-04-18 11:20:47.618537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e5bd0 is same with the state(5) to be set 00:33:19.569 [2024-04-18 11:20:47.618579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e5bd0 (9): Bad file descriptor 00:33:19.569 [2024-04-18 11:20:47.618599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:19.569 [2024-04-18 11:20:47.618609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:19.569 [2024-04-18 11:20:47.618620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:19.569 [2024-04-18 11:20:47.618649] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
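The flood of READ / "ABORTED - SQ DELETION" notices above is the expected fallout of the timeout test: once connect() to 10.0.0.2:4420 starts failing with errno 111 (ECONNREFUSED), bdev_nvme aborts every queued I/O, frees the disconnected qpair, and re-arms a controller reset, and the timestamps that follow show that reset being retried roughly every two seconds. The harness later verifies this by counting "reconnect delay" records in trace.txt. A minimal sketch of that check, assuming the trace file path and message text printed further down in this log (the threshold mirrors the (( 3 <= 2 )) guard the script expands to):

    # hedged sketch: fail unless at least three delayed reconnects were traced
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected >= 3 delayed reconnects, got $delays" >&2
        exit 1
    fi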
00:33:19.569 [2024-04-18 11:20:47.618661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.472 [2024-04-18 11:20:49.618851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.472 [2024-04-18 11:20:49.618959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.472 [2024-04-18 11:20:49.618980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e5bd0 with addr=10.0.0.2, port=4420 00:33:21.472 [2024-04-18 11:20:49.618997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e5bd0 is same with the state(5) to be set 00:33:21.472 [2024-04-18 11:20:49.619025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e5bd0 (9): Bad file descriptor 00:33:21.472 [2024-04-18 11:20:49.619057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.472 [2024-04-18 11:20:49.619069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.472 [2024-04-18 11:20:49.619080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.472 [2024-04-18 11:20:49.619114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.472 [2024-04-18 11:20:49.619125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.369 [2024-04-18 11:20:51.619223] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.302 00:33:24.302 Latency(us) 00:33:24.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.302 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:33:24.302 NVMe0n1 : 8.11 2350.48 9.18 15.78 0.00 54023.59 2427.81 7015926.69 00:33:24.302 =================================================================================================================== 00:33:24.302 Total : 2350.48 9.18 15.78 0.00 54023.59 2427.81 7015926.69 00:33:24.302 0 00:33:24.302 11:20:52 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:24.302 Attaching 5 probes... 
00:33:24.302 1219.892268: reset bdev controller NVMe0 00:33:24.302 1219.987595: reconnect bdev controller NVMe0 00:33:24.302 3221.250816: reconnect delay bdev controller NVMe0 00:33:24.302 3221.275886: reconnect bdev controller NVMe0 00:33:24.302 5222.358098: reconnect delay bdev controller NVMe0 00:33:24.302 5222.381074: reconnect bdev controller NVMe0 00:33:24.302 7222.824850: reconnect delay bdev controller NVMe0 00:33:24.302 7222.857231: reconnect bdev controller NVMe0 00:33:24.302 11:20:52 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:33:24.303 11:20:52 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:33:24.303 11:20:52 -- host/timeout.sh@136 -- # kill 108114 00:33:24.303 11:20:52 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:24.303 11:20:52 -- host/timeout.sh@139 -- # killprocess 108087 00:33:24.303 11:20:52 -- common/autotest_common.sh@936 -- # '[' -z 108087 ']' 00:33:24.303 11:20:52 -- common/autotest_common.sh@940 -- # kill -0 108087 00:33:24.303 11:20:52 -- common/autotest_common.sh@941 -- # uname 00:33:24.303 11:20:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:24.303 11:20:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108087 00:33:24.303 killing process with pid 108087 00:33:24.303 Received shutdown signal, test time was about 8.170517 seconds 00:33:24.303 00:33:24.303 Latency(us) 00:33:24.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.303 =================================================================================================================== 00:33:24.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.303 11:20:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:33:24.303 11:20:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:33:24.303 11:20:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108087' 00:33:24.303 11:20:52 -- common/autotest_common.sh@955 -- # kill 108087 00:33:24.303 11:20:52 -- common/autotest_common.sh@960 -- # wait 108087 00:33:24.303 11:20:52 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.868 11:20:53 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:33:24.868 11:20:53 -- host/timeout.sh@145 -- # nvmftestfini 00:33:24.868 11:20:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:24.868 11:20:53 -- nvmf/common.sh@117 -- # sync 00:33:24.868 11:20:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.868 11:20:53 -- nvmf/common.sh@120 -- # set +e 00:33:24.868 11:20:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.868 11:20:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.868 rmmod nvme_tcp 00:33:24.868 rmmod nvme_fabrics 00:33:24.868 rmmod nvme_keyring 00:33:24.868 11:20:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.868 11:20:53 -- nvmf/common.sh@124 -- # set -e 00:33:24.868 11:20:53 -- nvmf/common.sh@125 -- # return 0 00:33:24.868 11:20:53 -- nvmf/common.sh@478 -- # '[' -n 107502 ']' 00:33:24.868 11:20:53 -- nvmf/common.sh@479 -- # killprocess 107502 00:33:24.868 11:20:53 -- common/autotest_common.sh@936 -- # '[' -z 107502 ']' 00:33:24.868 11:20:53 -- common/autotest_common.sh@940 -- # kill -0 107502 00:33:24.868 11:20:53 -- common/autotest_common.sh@941 -- # uname 00:33:24.868 11:20:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:24.868 11:20:53 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 107502 00:33:24.868 killing process with pid 107502 00:33:24.868 11:20:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:24.868 11:20:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:24.868 11:20:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 107502' 00:33:24.868 11:20:53 -- common/autotest_common.sh@955 -- # kill 107502 00:33:24.868 11:20:53 -- common/autotest_common.sh@960 -- # wait 107502 00:33:25.127 11:20:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:33:25.127 11:20:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:25.127 11:20:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:25.127 11:20:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:25.127 11:20:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:25.127 11:20:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.127 11:20:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:25.127 11:20:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.127 11:20:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:25.127 ************************************ 00:33:25.127 END TEST nvmf_timeout 00:33:25.127 ************************************ 00:33:25.127 00:33:25.127 real 0m47.428s 00:33:25.127 user 2m19.741s 00:33:25.127 sys 0m4.957s 00:33:25.127 11:20:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:25.127 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:33:25.127 11:20:53 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:33:25.127 11:20:53 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:33:25.127 11:20:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:25.127 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:33:25.127 11:20:53 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:33:25.127 ************************************ 00:33:25.127 END TEST nvmf_tcp 00:33:25.127 ************************************ 00:33:25.127 00:33:25.127 real 18m14.097s 00:33:25.127 user 55m31.357s 00:33:25.127 sys 3m50.305s 00:33:25.127 11:20:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:25.127 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:33:25.127 11:20:53 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:33:25.127 11:20:53 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:25.127 11:20:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:25.127 11:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:25.127 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:33:25.386 ************************************ 00:33:25.386 START TEST spdkcli_nvmf_tcp 00:33:25.386 ************************************ 00:33:25.386 11:20:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:25.386 * Looking for test storage... 
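Before the spdkcli suite starts, nvmf_timeout tears the fabric back down: the subsystem is deleted over RPC, the initiator kernel modules listed in the rmmod output above are unloaded, the target process (pid 107502 here) is killed, and the addresses on the veth test interface are flushed. A rough sketch of those teardown steps, with module, nqn, and interface names taken from the log; the pid variable and error handling are assumptions, not the exact autotest source:

    # hedged sketch of the teardown steps visible above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp || true       # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring going away
    modprobe -v -r nvme-fabrics || true
    kill "$nvmfpid" && wait "$nvmfpid" || true   # $nvmfpid assumed to hold the target's pid (107502 here)
    ip -4 addr flush nvmf_init_if || true        # interface name as printed in the log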
00:33:25.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:25.386 11:20:53 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:25.386 11:20:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:25.386 11:20:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:25.386 11:20:53 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:25.386 11:20:53 -- nvmf/common.sh@7 -- # uname -s 00:33:25.386 11:20:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.386 11:20:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.386 11:20:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.386 11:20:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.386 11:20:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.386 11:20:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.386 11:20:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.386 11:20:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.386 11:20:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.386 11:20:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.386 11:20:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:33:25.386 11:20:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:33:25.386 11:20:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.386 11:20:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.386 11:20:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:25.386 11:20:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.386 11:20:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:25.386 11:20:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.386 11:20:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.386 11:20:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.387 11:20:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.387 11:20:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.387 11:20:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.387 11:20:53 -- paths/export.sh@5 -- # export PATH 00:33:25.387 11:20:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.387 11:20:53 -- nvmf/common.sh@47 -- # : 0 00:33:25.387 11:20:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:25.387 11:20:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:25.387 11:20:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.387 11:20:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.387 11:20:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.387 11:20:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:25.387 11:20:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:25.387 11:20:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:25.387 11:20:53 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:25.387 11:20:53 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:25.387 11:20:53 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:25.387 11:20:53 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:25.387 11:20:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:25.387 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:33:25.387 11:20:53 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:25.387 11:20:53 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108387 00:33:25.387 11:20:53 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:25.387 11:20:53 -- spdkcli/common.sh@34 -- # waitforlisten 108387 00:33:25.387 11:20:53 -- common/autotest_common.sh@817 -- # '[' -z 108387 ']' 00:33:25.387 11:20:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.387 11:20:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:25.387 11:20:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.387 11:20:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:25.387 11:20:53 -- common/autotest_common.sh@10 -- # set +x 00:33:25.387 [2024-04-18 11:20:53.986366] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
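run_nvmf_tgt launches a fresh target for the spdkcli suite, pinned to two cores (-m 0x3) with core 0 as the main core (-p 0), and waitforlisten blocks until the application answers on its UNIX-domain RPC socket before any spdkcli commands are sent. A rough sketch of that start-and-wait step; the socket path and the polling loop are assumptions, and rpc_get_methods is used only as a cheap liveness probe:

    # hedged sketch: start nvmf_tgt and poll its RPC socket until it answers
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
                rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done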
00:33:25.387 [2024-04-18 11:20:53.986668] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108387 ] 00:33:25.644 [2024-04-18 11:20:54.123160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:25.645 [2024-04-18 11:20:54.230712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.645 [2024-04-18 11:20:54.230722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.578 11:20:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:26.578 11:20:55 -- common/autotest_common.sh@850 -- # return 0 00:33:26.578 11:20:55 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:26.578 11:20:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:26.578 11:20:55 -- common/autotest_common.sh@10 -- # set +x 00:33:26.578 11:20:55 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:26.578 11:20:55 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:26.578 11:20:55 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:26.578 11:20:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:26.578 11:20:55 -- common/autotest_common.sh@10 -- # set +x 00:33:26.578 11:20:55 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:26.578 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:26.578 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:26.578 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:26.578 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:26.578 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:26.578 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:26.578 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:26.578 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:26.578 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:26.578 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:26.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:26.578 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:26.578 ' 00:33:27.144 [2024-04-18 11:20:55.511466] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:29.674 [2024-04-18 11:20:57.733378] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.607 [2024-04-18 11:20:59.006482] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:33.156 [2024-04-18 11:21:01.352106] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:35.056 [2024-04-18 11:21:03.393587] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:36.428 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:36.428 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:36.428 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:36.428 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:36.428 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:36.428 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:36.428 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:36.428 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:36.428 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:36.428 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:36.428 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:36.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:36.428 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:36.428 11:21:05 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:36.428 11:21:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:36.428 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:33:36.687 11:21:05 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:36.687 11:21:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:36.687 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:33:36.687 11:21:05 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:36.687 11:21:05 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:33:36.945 11:21:05 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:37.202 11:21:05 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:37.202 11:21:05 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:37.202 11:21:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:37.202 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:33:37.202 11:21:05 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:37.202 11:21:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:37.202 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:33:37.202 11:21:05 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:37.202 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:37.202 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:37.202 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:37.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:37.203 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:37.203 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:37.203 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:37.203 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:37.203 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:37.203 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:37.203 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:37.203 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:37.203 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:37.203 ' 00:33:42.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:42.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:42.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:42.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:42.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:42.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:42.490 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:42.490 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:42.490 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:42.490 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:42.490 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:42.490 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:42.490 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:42.490 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:42.490 11:21:11 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:42.490 11:21:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:42.490 11:21:11 -- common/autotest_common.sh@10 -- # set +x 00:33:42.749 11:21:11 -- spdkcli/nvmf.sh@90 -- # killprocess 108387 00:33:42.749 11:21:11 -- common/autotest_common.sh@936 -- # '[' -z 108387 ']' 00:33:42.749 11:21:11 -- common/autotest_common.sh@940 -- # kill -0 108387 00:33:42.749 11:21:11 -- common/autotest_common.sh@941 -- # uname 00:33:42.749 11:21:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:42.749 11:21:11 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108387 00:33:42.749 11:21:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:42.749 11:21:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:42.749 11:21:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108387' 00:33:42.749 killing process with pid 108387 00:33:42.749 11:21:11 -- common/autotest_common.sh@955 -- # kill 108387 00:33:42.749 [2024-04-18 11:21:11.168455] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:42.749 11:21:11 -- common/autotest_common.sh@960 -- # wait 108387 00:33:42.749 11:21:11 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:42.749 11:21:11 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:42.749 11:21:11 -- spdkcli/common.sh@13 -- # '[' -n 108387 ']' 00:33:42.749 11:21:11 -- spdkcli/common.sh@14 -- # killprocess 108387 00:33:42.749 11:21:11 -- common/autotest_common.sh@936 -- # '[' -z 108387 ']' 00:33:42.749 11:21:11 -- common/autotest_common.sh@940 -- # kill -0 108387 00:33:42.749 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (108387) - No such process 00:33:42.749 11:21:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 108387 is not found' 00:33:42.749 Process with pid 108387 is not found 00:33:42.749 11:21:11 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:42.749 11:21:11 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:42.749 11:21:11 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:42.749 00:33:42.749 real 0m17.558s 00:33:42.749 user 0m37.929s 00:33:42.749 sys 0m0.921s 00:33:42.749 11:21:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:42.749 ************************************ 00:33:42.749 END TEST spdkcli_nvmf_tcp 00:33:42.749 11:21:11 -- common/autotest_common.sh@10 -- # set +x 00:33:42.749 ************************************ 00:33:43.009 11:21:11 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:43.009 11:21:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:43.009 11:21:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:43.009 11:21:11 -- common/autotest_common.sh@10 -- # set +x 00:33:43.009 ************************************ 00:33:43.009 START TEST nvmf_identify_passthru 00:33:43.009 ************************************ 00:33:43.009 11:21:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:43.009 * Looking for test storage... 
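Note on the spdkcli_nvmf_tcp run above: the test replays its whole configuration through test/spdkcli/spdkcli_job.py and then diffs "spdkcli.py ll /nvmf" against a .match file, but the same paths can be driven one command at a time with scripts/spdkcli.py against a target listening on the default RPC socket. A minimal sketch, reusing command strings taken verbatim from the log (repo layout and serial number as in this run):

    # create a backing bdev, the TCP transport, and one subsystem with a namespace and a listener
    scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc3"
    scripts/spdkcli.py "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
    scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
    scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
    scripts/spdkcli.py ll /nvmf        # same listing the check_match step compares against its .match file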
00:33:43.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:43.009 11:21:11 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:43.009 11:21:11 -- nvmf/common.sh@7 -- # uname -s 00:33:43.009 11:21:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.009 11:21:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.009 11:21:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.009 11:21:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.009 11:21:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.009 11:21:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.009 11:21:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.009 11:21:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.009 11:21:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.009 11:21:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.009 11:21:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:33:43.009 11:21:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:33:43.009 11:21:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.009 11:21:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.009 11:21:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:43.009 11:21:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.009 11:21:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:43.009 11:21:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.009 11:21:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.009 11:21:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.009 11:21:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- paths/export.sh@5 -- # export PATH 00:33:43.009 11:21:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- nvmf/common.sh@47 -- # : 0 00:33:43.009 11:21:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:43.009 11:21:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:43.009 11:21:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.009 11:21:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.009 11:21:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.009 11:21:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:43.009 11:21:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:43.009 11:21:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:43.009 11:21:11 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:43.009 11:21:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.009 11:21:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.009 11:21:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.009 11:21:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- paths/export.sh@5 -- # export PATH 00:33:43.009 11:21:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.009 11:21:11 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:33:43.009 11:21:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:43.009 11:21:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.009 11:21:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:43.009 11:21:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:43.009 11:21:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:43.009 11:21:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.009 11:21:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:43.009 11:21:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.009 11:21:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:33:43.009 11:21:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:33:43.009 11:21:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:33:43.009 11:21:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:33:43.009 11:21:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:33:43.009 11:21:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:33:43.009 11:21:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.009 11:21:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.009 11:21:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:43.009 11:21:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:43.009 11:21:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:43.009 11:21:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:43.009 11:21:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:43.009 11:21:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.009 11:21:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:43.009 11:21:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:43.009 11:21:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:43.009 11:21:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:43.009 11:21:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:43.009 11:21:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:43.009 Cannot find device "nvmf_tgt_br" 00:33:43.009 11:21:11 -- nvmf/common.sh@155 -- # true 00:33:43.009 11:21:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:43.267 Cannot find device "nvmf_tgt_br2" 00:33:43.267 11:21:11 -- nvmf/common.sh@156 -- # true 00:33:43.267 11:21:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:43.267 11:21:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:43.267 Cannot find device "nvmf_tgt_br" 00:33:43.267 11:21:11 -- nvmf/common.sh@158 -- # true 00:33:43.267 11:21:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:43.267 Cannot find device "nvmf_tgt_br2" 00:33:43.267 11:21:11 -- nvmf/common.sh@159 -- # true 00:33:43.267 11:21:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:43.267 11:21:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:43.267 11:21:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:43.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:43.267 11:21:11 -- nvmf/common.sh@162 -- # true 00:33:43.267 11:21:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:43.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:33:43.267 11:21:11 -- nvmf/common.sh@163 -- # true 00:33:43.267 11:21:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:43.267 11:21:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:43.267 11:21:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:43.267 11:21:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:43.267 11:21:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:43.267 11:21:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:43.267 11:21:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:43.267 11:21:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:43.267 11:21:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:43.267 11:21:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:43.267 11:21:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:43.267 11:21:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:43.267 11:21:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:43.267 11:21:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:43.267 11:21:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:43.267 11:21:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:43.267 11:21:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:43.267 11:21:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:43.267 11:21:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:43.267 11:21:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:43.526 11:21:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:43.526 11:21:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:43.526 11:21:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:43.526 11:21:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:43.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:33:43.526 00:33:43.526 --- 10.0.0.2 ping statistics --- 00:33:43.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.526 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:33:43.526 11:21:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:43.526 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:43.526 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:33:43.526 00:33:43.526 --- 10.0.0.3 ping statistics --- 00:33:43.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.526 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:33:43.526 11:21:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:43.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
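The nvmf_veth_init helper in nvmf/common.sh builds the loopback-style test network these pings exercise: one namespace for the target, two veth pairs joined by a bridge, 10.0.0.1/24 on the initiator end and 10.0.0.2/24 on the target end. A condensed sketch of the commands visible in the log (interface names and addresses as used in this run, error handling and the teardown of any previous topology omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # same reachability check as the pings in this log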
00:33:43.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:33:43.526 00:33:43.526 --- 10.0.0.1 ping statistics --- 00:33:43.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.526 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:33:43.526 11:21:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.526 11:21:11 -- nvmf/common.sh@422 -- # return 0 00:33:43.526 11:21:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:33:43.526 11:21:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.526 11:21:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:43.526 11:21:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:43.526 11:21:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.526 11:21:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:43.526 11:21:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:43.526 11:21:11 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:43.526 11:21:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:43.526 11:21:11 -- common/autotest_common.sh@10 -- # set +x 00:33:43.526 11:21:11 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:43.526 11:21:11 -- common/autotest_common.sh@1510 -- # bdfs=() 00:33:43.526 11:21:11 -- common/autotest_common.sh@1510 -- # local bdfs 00:33:43.526 11:21:11 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:33:43.526 11:21:11 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:33:43.526 11:21:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:33:43.526 11:21:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:33:43.526 11:21:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:43.526 11:21:11 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:43.526 11:21:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:33:43.526 11:21:12 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:33:43.526 11:21:12 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:33:43.526 11:21:12 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:33:43.526 11:21:12 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:33:43.526 11:21:12 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:33:43.526 11:21:12 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:33:43.526 11:21:12 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:43.526 11:21:12 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:43.793 11:21:12 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:33:43.793 11:21:12 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:43.793 11:21:12 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:33:43.793 11:21:12 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:43.793 11:21:12 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:33:43.793 11:21:12 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:43.793 11:21:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:43.793 11:21:12 -- common/autotest_common.sh@10 -- # set +x 00:33:43.793 11:21:12 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:33:43.793 11:21:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:43.793 11:21:12 -- common/autotest_common.sh@10 -- # set +x 00:33:43.793 11:21:12 -- target/identify_passthru.sh@31 -- # nvmfpid=108894 00:33:43.793 11:21:12 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:43.793 11:21:12 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:43.793 11:21:12 -- target/identify_passthru.sh@35 -- # waitforlisten 108894 00:33:43.793 11:21:12 -- common/autotest_common.sh@817 -- # '[' -z 108894 ']' 00:33:43.793 11:21:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.793 11:21:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:43.793 11:21:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.793 11:21:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:43.793 11:21:12 -- common/autotest_common.sh@10 -- # set +x 00:33:44.066 [2024-04-18 11:21:12.477456] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:33:44.066 [2024-04-18 11:21:12.477556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.066 [2024-04-18 11:21:12.622951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:44.325 [2024-04-18 11:21:12.728345] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.325 [2024-04-18 11:21:12.728701] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.325 [2024-04-18 11:21:12.728805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.325 [2024-04-18 11:21:12.728915] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:44.325 [2024-04-18 11:21:12.729007] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
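In the identify_passthru steps above, get_first_nvme_bdf resolves the PCI address of the first NVMe controller and spdk_nvme_identify reads its serial and model numbers so they can later be compared with what the passthru subsystem reports over TCP. The same two steps by hand, using the commands shown in the log; the head -n1 shortcut stands in for the helper's array handling, and the repo root is assumed to be /home/vagrant/spdk_repo/spdk:

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # e.g. 0000:00:10.0 in this run
    build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}'
    build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}'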
00:33:44.325 [2024-04-18 11:21:12.729302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.325 [2024-04-18 11:21:12.729372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:44.325 [2024-04-18 11:21:12.729455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:44.325 [2024-04-18 11:21:12.729462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.890 11:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:44.890 11:21:13 -- common/autotest_common.sh@850 -- # return 0 00:33:44.890 11:21:13 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:44.890 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:44.890 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:44.890 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:44.890 11:21:13 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:44.890 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:44.890 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 [2024-04-18 11:21:13.597278] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:45.147 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:45.147 11:21:13 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:45.147 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:45.147 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 [2024-04-18 11:21:13.611740] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.147 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:45.147 11:21:13 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:45.147 11:21:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:45.147 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 11:21:13 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:33:45.147 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:45.147 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 Nvme0n1 00:33:45.147 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:45.147 11:21:13 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:45.147 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:45.147 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:45.147 11:21:13 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:45.147 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:45.147 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:45.147 11:21:13 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.147 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:45.147 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 [2024-04-18 11:21:13.744810] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.147 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:33:45.147 11:21:13 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:45.147 11:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:45.147 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:33:45.147 [2024-04-18 11:21:13.752568] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:45.147 [ 00:33:45.147 { 00:33:45.147 "allow_any_host": true, 00:33:45.147 "hosts": [], 00:33:45.147 "listen_addresses": [], 00:33:45.147 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:45.147 "subtype": "Discovery" 00:33:45.147 }, 00:33:45.147 { 00:33:45.147 "allow_any_host": true, 00:33:45.147 "hosts": [], 00:33:45.147 "listen_addresses": [ 00:33:45.147 { 00:33:45.147 "adrfam": "IPv4", 00:33:45.147 "traddr": "10.0.0.2", 00:33:45.147 "transport": "TCP", 00:33:45.147 "trsvcid": "4420", 00:33:45.147 "trtype": "TCP" 00:33:45.147 } 00:33:45.147 ], 00:33:45.147 "max_cntlid": 65519, 00:33:45.147 "max_namespaces": 1, 00:33:45.147 "min_cntlid": 1, 00:33:45.147 "model_number": "SPDK bdev Controller", 00:33:45.147 "namespaces": [ 00:33:45.147 { 00:33:45.147 "bdev_name": "Nvme0n1", 00:33:45.147 "name": "Nvme0n1", 00:33:45.147 "nguid": "A95296EF297B4C979C797BB5F1103564", 00:33:45.147 "nsid": 1, 00:33:45.147 "uuid": "a95296ef-297b-4c97-9c79-7bb5f1103564" 00:33:45.147 } 00:33:45.147 ], 00:33:45.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:45.147 "serial_number": "SPDK00000000000001", 00:33:45.147 "subtype": "NVMe" 00:33:45.147 } 00:33:45.147 ] 00:33:45.147 11:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:45.147 11:21:13 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:45.147 11:21:13 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:45.147 11:21:13 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:45.404 11:21:13 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:33:45.404 11:21:13 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:45.404 11:21:13 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:45.404 11:21:13 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:45.662 11:21:14 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:33:45.662 11:21:14 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:33:45.662 11:21:14 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:33:45.662 11:21:14 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:45.662 11:21:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:45.662 11:21:14 -- common/autotest_common.sh@10 -- # set +x 00:33:45.662 11:21:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:45.662 11:21:14 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:45.662 11:21:14 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:45.662 11:21:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:45.662 11:21:14 -- nvmf/common.sh@117 -- # sync 00:33:45.662 11:21:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:45.662 11:21:14 -- nvmf/common.sh@120 -- # set +e 00:33:45.662 11:21:14 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:33:45.662 11:21:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:45.662 rmmod nvme_tcp 00:33:45.662 rmmod nvme_fabrics 00:33:45.662 rmmod nvme_keyring 00:33:45.662 11:21:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:45.662 11:21:14 -- nvmf/common.sh@124 -- # set -e 00:33:45.662 11:21:14 -- nvmf/common.sh@125 -- # return 0 00:33:45.662 11:21:14 -- nvmf/common.sh@478 -- # '[' -n 108894 ']' 00:33:45.662 11:21:14 -- nvmf/common.sh@479 -- # killprocess 108894 00:33:45.662 11:21:14 -- common/autotest_common.sh@936 -- # '[' -z 108894 ']' 00:33:45.662 11:21:14 -- common/autotest_common.sh@940 -- # kill -0 108894 00:33:45.662 11:21:14 -- common/autotest_common.sh@941 -- # uname 00:33:45.921 11:21:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:45.921 11:21:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108894 00:33:45.921 killing process with pid 108894 00:33:45.921 11:21:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:45.921 11:21:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:45.921 11:21:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108894' 00:33:45.921 11:21:14 -- common/autotest_common.sh@955 -- # kill 108894 00:33:45.921 [2024-04-18 11:21:14.321517] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:45.921 11:21:14 -- common/autotest_common.sh@960 -- # wait 108894 00:33:45.921 11:21:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:33:45.921 11:21:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:45.921 11:21:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:45.921 11:21:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:45.921 11:21:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:45.921 11:21:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.921 11:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:45.921 11:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.180 11:21:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:46.180 ************************************ 00:33:46.180 END TEST nvmf_identify_passthru 00:33:46.180 ************************************ 00:33:46.180 00:33:46.180 real 0m3.068s 00:33:46.180 user 0m7.648s 00:33:46.180 sys 0m0.773s 00:33:46.180 11:21:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:46.180 11:21:14 -- common/autotest_common.sh@10 -- # set +x 00:33:46.180 11:21:14 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:33:46.180 11:21:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:46.180 11:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:46.180 11:21:14 -- common/autotest_common.sh@10 -- # set +x 00:33:46.180 ************************************ 00:33:46.180 START TEST nvmf_dif 00:33:46.180 ************************************ 00:33:46.180 11:21:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:33:46.180 * Looking for test storage... 
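Because nvmf_tgt was started with --wait-for-rpc, the test first enables the passthru identify handler and only then lets the framework initialize; after attaching the local controller as bdev Nvme0, a single-namespace subsystem exposes it on the veth network and the serial/model read over TCP must match the PCIe values (12340 / QEMU). A sketch of the RPC sequence shown above, issued with scripts/rpc.py against the default /var/tmp/spdk.sock socket (the rpc_cmd wrapper in the log does the equivalent):

    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr       # must be set before framework_start_init
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # identify through the target; serial and model should match the values read from the PCIe controller earlier
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'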
00:33:46.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:46.180 11:21:14 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:46.180 11:21:14 -- nvmf/common.sh@7 -- # uname -s 00:33:46.180 11:21:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.180 11:21:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.180 11:21:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.180 11:21:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.180 11:21:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.180 11:21:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.180 11:21:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.180 11:21:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.180 11:21:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.180 11:21:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.180 11:21:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:33:46.180 11:21:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:33:46.180 11:21:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.180 11:21:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.180 11:21:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:46.180 11:21:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.180 11:21:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:46.180 11:21:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.180 11:21:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.180 11:21:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.180 11:21:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.180 11:21:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.180 11:21:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.180 11:21:14 -- paths/export.sh@5 -- # export PATH 00:33:46.181 11:21:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.181 11:21:14 -- nvmf/common.sh@47 -- # : 0 00:33:46.181 11:21:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:46.181 11:21:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:46.181 11:21:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.181 11:21:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.181 11:21:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.181 11:21:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:46.181 11:21:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:46.181 11:21:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:46.181 11:21:14 -- target/dif.sh@15 -- # NULL_META=16 00:33:46.181 11:21:14 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:46.181 11:21:14 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:46.181 11:21:14 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:46.181 11:21:14 -- target/dif.sh@135 -- # nvmftestinit 00:33:46.181 11:21:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:46.181 11:21:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.181 11:21:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:46.181 11:21:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:46.181 11:21:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:46.181 11:21:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.181 11:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:46.181 11:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.181 11:21:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:33:46.181 11:21:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:33:46.181 11:21:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:33:46.181 11:21:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:33:46.181 11:21:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:33:46.181 11:21:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:33:46.181 11:21:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.181 11:21:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.181 11:21:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:46.181 11:21:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:46.181 11:21:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:46.181 11:21:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:46.181 11:21:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:46.181 11:21:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.181 11:21:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:46.181 11:21:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:46.181 11:21:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:46.181 11:21:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:46.181 11:21:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:46.181 11:21:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:46.439 Cannot find device "nvmf_tgt_br" 
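The NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64 and NULL_DIF=1 defaults set by dif.sh above describe the backing device this test uses: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata per block and T10 DIF type 1 protection, which is exactly what the bdev_null_create call later in this log turns them into. As a standalone sketch (socket path as in this run; the bdev_get_bdevs check is not part of this run but can confirm the metadata and DIF settings):

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py bdev_get_bdevs -b bdev_null0      # inspect block_size, md_size and dif_type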
00:33:46.439 11:21:14 -- nvmf/common.sh@155 -- # true 00:33:46.439 11:21:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:46.439 Cannot find device "nvmf_tgt_br2" 00:33:46.439 11:21:14 -- nvmf/common.sh@156 -- # true 00:33:46.439 11:21:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:46.439 11:21:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:46.439 Cannot find device "nvmf_tgt_br" 00:33:46.439 11:21:14 -- nvmf/common.sh@158 -- # true 00:33:46.439 11:21:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:46.439 Cannot find device "nvmf_tgt_br2" 00:33:46.439 11:21:14 -- nvmf/common.sh@159 -- # true 00:33:46.439 11:21:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:46.439 11:21:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:46.439 11:21:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:46.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:46.439 11:21:14 -- nvmf/common.sh@162 -- # true 00:33:46.439 11:21:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:46.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:46.439 11:21:14 -- nvmf/common.sh@163 -- # true 00:33:46.439 11:21:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:46.439 11:21:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:46.439 11:21:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:46.439 11:21:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:46.439 11:21:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:46.439 11:21:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:46.439 11:21:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:46.439 11:21:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:46.439 11:21:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:46.439 11:21:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:46.439 11:21:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:46.439 11:21:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:46.439 11:21:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:46.439 11:21:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:46.439 11:21:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:46.439 11:21:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:46.439 11:21:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:46.439 11:21:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:46.439 11:21:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:46.439 11:21:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:46.439 11:21:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:46.439 11:21:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:46.697 11:21:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:46.697 11:21:15 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:46.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:33:46.697 00:33:46.697 --- 10.0.0.2 ping statistics --- 00:33:46.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.697 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:33:46.697 11:21:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:46.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:46.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:33:46.697 00:33:46.697 --- 10.0.0.3 ping statistics --- 00:33:46.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.697 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:33:46.697 11:21:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:46.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:33:46.697 00:33:46.697 --- 10.0.0.1 ping statistics --- 00:33:46.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.697 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:33:46.697 11:21:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.697 11:21:15 -- nvmf/common.sh@422 -- # return 0 00:33:46.697 11:21:15 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:33:46.697 11:21:15 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:46.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:46.955 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:46.955 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:46.955 11:21:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.955 11:21:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:46.955 11:21:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:46.955 11:21:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.955 11:21:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:46.955 11:21:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:46.955 11:21:15 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:46.955 11:21:15 -- target/dif.sh@137 -- # nvmfappstart 00:33:46.955 11:21:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:46.955 11:21:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:46.955 11:21:15 -- common/autotest_common.sh@10 -- # set +x 00:33:46.955 11:21:15 -- nvmf/common.sh@470 -- # nvmfpid=109242 00:33:46.955 11:21:15 -- nvmf/common.sh@471 -- # waitforlisten 109242 00:33:46.955 11:21:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:46.955 11:21:15 -- common/autotest_common.sh@817 -- # '[' -z 109242 ']' 00:33:46.955 11:21:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.955 11:21:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:46.955 11:21:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
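nvmfappstart runs the target inside the new namespace and waitforlisten polls until the RPC socket answers; note that NVMF_TRANSPORT_OPTS has just grown --dif-insert-or-strip, so the TCP transport created next will insert and strip protection information on behalf of the initiator. A rough equivalent of the start-and-wait step; the polling loop is a simplification of waitforlisten, which also checks that the PID is still alive:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                  # keep polling until the app listens on the RPC socket
    done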
00:33:46.955 11:21:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:46.955 11:21:15 -- common/autotest_common.sh@10 -- # set +x 00:33:46.955 [2024-04-18 11:21:15.525991] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:33:46.955 [2024-04-18 11:21:15.526116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.212 [2024-04-18 11:21:15.666789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.212 [2024-04-18 11:21:15.747130] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.213 [2024-04-18 11:21:15.747193] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.213 [2024-04-18 11:21:15.747209] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.213 [2024-04-18 11:21:15.747220] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.213 [2024-04-18 11:21:15.747229] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.213 [2024-04-18 11:21:15.747262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.164 11:21:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:48.164 11:21:16 -- common/autotest_common.sh@850 -- # return 0 00:33:48.164 11:21:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:48.164 11:21:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:48.164 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:33:48.164 11:21:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.164 11:21:16 -- target/dif.sh@139 -- # create_transport 00:33:48.164 11:21:16 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:48.164 11:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:48.164 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:33:48.164 [2024-04-18 11:21:16.551432] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.164 11:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:48.164 11:21:16 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:48.164 11:21:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:48.164 11:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:48.164 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:33:48.164 ************************************ 00:33:48.164 START TEST fio_dif_1_default 00:33:48.164 ************************************ 00:33:48.164 11:21:16 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:33:48.164 11:21:16 -- target/dif.sh@86 -- # create_subsystems 0 00:33:48.164 11:21:16 -- target/dif.sh@28 -- # local sub 00:33:48.164 11:21:16 -- target/dif.sh@30 -- # for sub in "$@" 00:33:48.164 11:21:16 -- target/dif.sh@31 -- # create_subsystem 0 00:33:48.164 11:21:16 -- target/dif.sh@18 -- # local sub_id=0 00:33:48.164 11:21:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:48.164 11:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:48.164 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:33:48.164 bdev_null0 00:33:48.164 11:21:16 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:48.164 11:21:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:48.164 11:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:48.164 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:33:48.164 11:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:48.164 11:21:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:48.164 11:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:48.164 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:33:48.164 11:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:48.164 11:21:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.164 11:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:48.164 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:33:48.164 [2024-04-18 11:21:16.655592] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.164 11:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:48.164 11:21:16 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:48.164 11:21:16 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:48.164 11:21:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:48.164 11:21:16 -- nvmf/common.sh@521 -- # config=() 00:33:48.164 11:21:16 -- nvmf/common.sh@521 -- # local subsystem config 00:33:48.164 11:21:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:48.164 11:21:16 -- target/dif.sh@82 -- # gen_fio_conf 00:33:48.164 11:21:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.164 11:21:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:48.164 { 00:33:48.164 "params": { 00:33:48.164 "name": "Nvme$subsystem", 00:33:48.164 "trtype": "$TEST_TRANSPORT", 00:33:48.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.164 "adrfam": "ipv4", 00:33:48.164 "trsvcid": "$NVMF_PORT", 00:33:48.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.164 "hdgst": ${hdgst:-false}, 00:33:48.164 "ddgst": ${ddgst:-false} 00:33:48.164 }, 00:33:48.164 "method": "bdev_nvme_attach_controller" 00:33:48.164 } 00:33:48.164 EOF 00:33:48.164 )") 00:33:48.164 11:21:16 -- target/dif.sh@54 -- # local file 00:33:48.164 11:21:16 -- target/dif.sh@56 -- # cat 00:33:48.164 11:21:16 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.164 11:21:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:48.164 11:21:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.164 11:21:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:48.164 11:21:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:48.164 11:21:16 -- nvmf/common.sh@543 -- # cat 00:33:48.164 11:21:16 -- common/autotest_common.sh@1327 -- # shift 00:33:48.164 11:21:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:48.164 11:21:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.164 11:21:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:48.164 11:21:16 -- target/dif.sh@72 -- # (( file <= files )) 
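fio_dif_1_default wires the pieces together: the cnode0 subsystem backed by bdev_null0 is exported on 10.0.0.2:4420 (the same RPC pattern as the passthru sequence earlier), fio is launched through the SPDK bdev fio plugin, and the JSON handed to --spdk_json_conf (printed above) attaches that subsystem inside the fio process as bdev Nvme0n1. A hand-written version of roughly the same job; the dif.fio and bdev.json file names, filename=Nvme0n1 and thread=1 are assumptions layered on top of what the log shows:

    cat > dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=bdev.json      # assumed file holding the same JSON gen_nvmf_target_json produced above
    thread=1                      # the SPDK bdev plugin requires threaded jobs
    [filename0]
    filename=Nvme0n1              # assumed bdev name created by the bdev_nvme_attach_controller entry in that JSON
    rw=randread
    bs=4096
    iodepth=4
    EOF
    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio dif.fio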
00:33:48.164 11:21:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:48.164 11:21:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:48.164 11:21:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:48.164 11:21:16 -- nvmf/common.sh@545 -- # jq . 00:33:48.164 11:21:16 -- nvmf/common.sh@546 -- # IFS=, 00:33:48.164 11:21:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:48.164 "params": { 00:33:48.164 "name": "Nvme0", 00:33:48.164 "trtype": "tcp", 00:33:48.165 "traddr": "10.0.0.2", 00:33:48.165 "adrfam": "ipv4", 00:33:48.165 "trsvcid": "4420", 00:33:48.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.165 "hdgst": false, 00:33:48.165 "ddgst": false 00:33:48.165 }, 00:33:48.165 "method": "bdev_nvme_attach_controller" 00:33:48.165 }' 00:33:48.165 11:21:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:48.165 11:21:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:48.165 11:21:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.165 11:21:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:48.165 11:21:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:48.165 11:21:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:33:48.165 11:21:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:33:48.165 11:21:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:33:48.165 11:21:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:48.165 11:21:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.433 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:48.433 fio-3.35 00:33:48.433 Starting 1 thread 00:34:00.627 00:34:00.627 filename0: (groupid=0, jobs=1): err= 0: pid=109331: Thu Apr 18 11:21:27 2024 00:34:00.627 read: IOPS=1692, BW=6772KiB/s (6935kB/s)(66.2MiB/10013msec) 00:34:00.627 slat (nsec): min=6107, max=85638, avg=8629.50, stdev=3118.78 00:34:00.627 clat (usec): min=410, max=42474, avg=2336.68, stdev=8466.19 00:34:00.627 lat (usec): min=417, max=42485, avg=2345.31, stdev=8466.26 00:34:00.627 clat percentiles (usec): 00:34:00.627 | 1.00th=[ 445], 5.00th=[ 453], 10.00th=[ 457], 20.00th=[ 465], 00:34:00.627 | 30.00th=[ 474], 40.00th=[ 478], 50.00th=[ 482], 60.00th=[ 490], 00:34:00.627 | 70.00th=[ 498], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 594], 00:34:00.627 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:34:00.627 | 99.99th=[42206] 00:34:00.627 bw ( KiB/s): min= 3008, max=11168, per=100.00%, avg=6779.20, stdev=2355.78, samples=20 00:34:00.627 iops : min= 752, max= 2792, avg=1694.80, stdev=588.94, samples=20 00:34:00.627 lat (usec) : 500=72.89%, 750=22.51% 00:34:00.627 lat (msec) : 2=0.02%, 10=0.02%, 50=4.55% 00:34:00.627 cpu : usr=90.24%, sys=8.79%, ctx=23, majf=0, minf=9 00:34:00.627 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.627 issued rwts: total=16952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.627 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:00.627 00:34:00.627 Run status group 0 
(all jobs): 00:34:00.627 READ: bw=6772KiB/s (6935kB/s), 6772KiB/s-6772KiB/s (6935kB/s-6935kB/s), io=66.2MiB (69.4MB), run=10013-10013msec 00:34:00.627 11:21:27 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:00.627 11:21:27 -- target/dif.sh@43 -- # local sub 00:34:00.627 11:21:27 -- target/dif.sh@45 -- # for sub in "$@" 00:34:00.627 11:21:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:00.627 11:21:27 -- target/dif.sh@36 -- # local sub_id=0 00:34:00.627 11:21:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:00.627 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.627 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.627 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.627 11:21:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:00.627 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.627 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.627 ************************************ 00:34:00.627 END TEST fio_dif_1_default 00:34:00.627 ************************************ 00:34:00.627 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.627 00:34:00.627 real 0m11.018s 00:34:00.627 user 0m9.671s 00:34:00.627 sys 0m1.158s 00:34:00.627 11:21:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:00.627 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.627 11:21:27 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:00.627 11:21:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:00.627 11:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:00.627 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.627 ************************************ 00:34:00.627 START TEST fio_dif_1_multi_subsystems 00:34:00.627 ************************************ 00:34:00.627 11:21:27 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:34:00.627 11:21:27 -- target/dif.sh@92 -- # local files=1 00:34:00.627 11:21:27 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:00.627 11:21:27 -- target/dif.sh@28 -- # local sub 00:34:00.627 11:21:27 -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.627 11:21:27 -- target/dif.sh@31 -- # create_subsystem 0 00:34:00.627 11:21:27 -- target/dif.sh@18 -- # local sub_id=0 00:34:00.627 11:21:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:00.627 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.627 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.627 bdev_null0 00:34:00.627 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.627 11:21:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:00.627 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.627 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.627 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.627 11:21:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:00.627 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.627 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.627 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.627 11:21:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp 
-a 10.0.0.2 -s 4420 00:34:00.627 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.628 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.628 [2024-04-18 11:21:27.795070] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.628 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.628 11:21:27 -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.628 11:21:27 -- target/dif.sh@31 -- # create_subsystem 1 00:34:00.628 11:21:27 -- target/dif.sh@18 -- # local sub_id=1 00:34:00.628 11:21:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:00.628 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.628 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.628 bdev_null1 00:34:00.628 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.628 11:21:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:00.628 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.628 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.628 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.628 11:21:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:00.628 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.628 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.628 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.628 11:21:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.628 11:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:00.628 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:34:00.628 11:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:00.628 11:21:27 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:00.628 11:21:27 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:00.628 11:21:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:00.628 11:21:27 -- nvmf/common.sh@521 -- # config=() 00:34:00.628 11:21:27 -- nvmf/common.sh@521 -- # local subsystem config 00:34:00.628 11:21:27 -- target/dif.sh@82 -- # gen_fio_conf 00:34:00.628 11:21:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.628 11:21:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:00.628 11:21:27 -- target/dif.sh@54 -- # local file 00:34:00.628 11:21:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:00.628 { 00:34:00.628 "params": { 00:34:00.628 "name": "Nvme$subsystem", 00:34:00.628 "trtype": "$TEST_TRANSPORT", 00:34:00.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.628 "adrfam": "ipv4", 00:34:00.628 "trsvcid": "$NVMF_PORT", 00:34:00.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.628 "hdgst": ${hdgst:-false}, 00:34:00.628 "ddgst": ${ddgst:-false} 00:34:00.628 }, 00:34:00.628 "method": "bdev_nvme_attach_controller" 00:34:00.628 } 00:34:00.628 EOF 00:34:00.628 )") 00:34:00.628 11:21:27 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.628 11:21:27 -- target/dif.sh@56 -- # cat 00:34:00.628 11:21:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 
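The multi-subsystem case repeats that setup for cnode0 and cnode1, so the generated JSON (printed a little further down) carries two bdev_nvme_attach_controller entries and the job file addresses two bdevs. The fio_plugin trace above shows how the run is launched: fio with the SPDK bdev engine preloaded and the JSON config plus job file handed over pipe file descriptors. Written out by hand, with on-disk file names, the job-file contents, and the Nvme0n1/Nvme1n1 bdev names all being illustrative assumptions (the log only shows the resulting randread/4096B/iodepth=4 preamble):

    # Hypothetical job file; filename= values assume the default <name>n1 bdev naming
    # produced by the bdev_nvme_attach_controller entries in the JSON config.
    cat > dif.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=4096
    iodepth=4
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    EOF
    # Engine, plugin path and --spdk_json_conf as in the trace; ./nvme.json and ./dif.fio
    # stand in for the /dev/fd/62 and /dev/fd/61 pipes used by the harness.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme.json ./dif.fio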
00:34:00.628 11:21:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:00.628 11:21:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:00.628 11:21:27 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:00.628 11:21:27 -- common/autotest_common.sh@1327 -- # shift 00:34:00.628 11:21:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:00.628 11:21:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.628 11:21:27 -- nvmf/common.sh@543 -- # cat 00:34:00.628 11:21:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:00.628 11:21:27 -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.628 11:21:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:00.628 11:21:27 -- target/dif.sh@73 -- # cat 00:34:00.628 11:21:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:00.628 11:21:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:00.628 11:21:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:00.628 11:21:27 -- target/dif.sh@72 -- # (( file++ )) 00:34:00.628 11:21:27 -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.628 11:21:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:00.628 { 00:34:00.628 "params": { 00:34:00.628 "name": "Nvme$subsystem", 00:34:00.628 "trtype": "$TEST_TRANSPORT", 00:34:00.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.628 "adrfam": "ipv4", 00:34:00.628 "trsvcid": "$NVMF_PORT", 00:34:00.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.628 "hdgst": ${hdgst:-false}, 00:34:00.628 "ddgst": ${ddgst:-false} 00:34:00.628 }, 00:34:00.628 "method": "bdev_nvme_attach_controller" 00:34:00.628 } 00:34:00.628 EOF 00:34:00.628 )") 00:34:00.628 11:21:27 -- nvmf/common.sh@543 -- # cat 00:34:00.628 11:21:27 -- nvmf/common.sh@545 -- # jq . 
00:34:00.628 11:21:27 -- nvmf/common.sh@546 -- # IFS=, 00:34:00.628 11:21:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:00.628 "params": { 00:34:00.628 "name": "Nvme0", 00:34:00.628 "trtype": "tcp", 00:34:00.628 "traddr": "10.0.0.2", 00:34:00.628 "adrfam": "ipv4", 00:34:00.628 "trsvcid": "4420", 00:34:00.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:00.628 "hdgst": false, 00:34:00.628 "ddgst": false 00:34:00.628 }, 00:34:00.628 "method": "bdev_nvme_attach_controller" 00:34:00.628 },{ 00:34:00.628 "params": { 00:34:00.628 "name": "Nvme1", 00:34:00.628 "trtype": "tcp", 00:34:00.628 "traddr": "10.0.0.2", 00:34:00.628 "adrfam": "ipv4", 00:34:00.628 "trsvcid": "4420", 00:34:00.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:00.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:00.629 "hdgst": false, 00:34:00.629 "ddgst": false 00:34:00.629 }, 00:34:00.629 "method": "bdev_nvme_attach_controller" 00:34:00.629 }' 00:34:00.629 11:21:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:00.629 11:21:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:00.629 11:21:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.629 11:21:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:00.629 11:21:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:00.629 11:21:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:00.629 11:21:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:00.629 11:21:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:00.629 11:21:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:00.629 11:21:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.629 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.629 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.629 fio-3.35 00:34:00.629 Starting 2 threads 00:34:10.620 00:34:10.620 filename0: (groupid=0, jobs=1): err= 0: pid=109495: Thu Apr 18 11:21:38 2024 00:34:10.620 read: IOPS=208, BW=833KiB/s (853kB/s)(8336KiB/10007msec) 00:34:10.620 slat (nsec): min=7415, max=46977, avg=10267.02, stdev=4820.72 00:34:10.621 clat (usec): min=434, max=42484, avg=19174.47, stdev=20217.61 00:34:10.621 lat (usec): min=442, max=42495, avg=19184.74, stdev=20217.32 00:34:10.621 clat percentiles (usec): 00:34:10.621 | 1.00th=[ 457], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 490], 00:34:10.621 | 30.00th=[ 506], 40.00th=[ 537], 50.00th=[ 865], 60.00th=[40633], 00:34:10.621 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:10.621 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:10.621 | 99.99th=[42730] 00:34:10.621 bw ( KiB/s): min= 640, max= 1088, per=51.47%, avg=832.05, stdev=139.61, samples=20 00:34:10.621 iops : min= 160, max= 272, avg=208.00, stdev=34.92, samples=20 00:34:10.621 lat (usec) : 500=27.45%, 750=16.79%, 1000=9.69% 00:34:10.621 lat (msec) : 4=0.19%, 50=45.87% 00:34:10.621 cpu : usr=95.01%, sys=4.53%, ctx=110, majf=0, minf=9 00:34:10.621 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.621 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.621 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.621 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:10.621 filename1: (groupid=0, jobs=1): err= 0: pid=109496: Thu Apr 18 11:21:38 2024 00:34:10.621 read: IOPS=195, BW=784KiB/s (802kB/s)(7840KiB/10006msec) 00:34:10.621 slat (nsec): min=5091, max=56095, avg=11797.44, stdev=7586.14 00:34:10.621 clat (usec): min=453, max=42963, avg=20380.81, stdev=20310.52 00:34:10.621 lat (usec): min=461, max=42993, avg=20392.61, stdev=20310.41 00:34:10.621 clat percentiles (usec): 00:34:10.621 | 1.00th=[ 465], 5.00th=[ 482], 10.00th=[ 490], 20.00th=[ 510], 00:34:10.621 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 914], 60.00th=[41157], 00:34:10.621 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:10.621 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:10.621 | 99.99th=[42730] 00:34:10.621 bw ( KiB/s): min= 608, max= 1120, per=48.62%, avg=786.53, stdev=142.04, samples=19 00:34:10.621 iops : min= 152, max= 280, avg=196.63, stdev=35.51, samples=19 00:34:10.621 lat (usec) : 500=14.80%, 750=29.90%, 1000=6.33% 00:34:10.621 lat (msec) : 4=0.20%, 50=48.78% 00:34:10.621 cpu : usr=95.20%, sys=4.28%, ctx=17, majf=0, minf=9 00:34:10.621 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.621 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.621 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:10.621 00:34:10.621 Run status group 0 (all jobs): 00:34:10.621 READ: bw=1616KiB/s (1655kB/s), 784KiB/s-833KiB/s (802kB/s-853kB/s), io=15.8MiB (16.6MB), run=10006-10007msec 00:34:10.621 11:21:38 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:10.621 11:21:38 -- target/dif.sh@43 -- # local sub 00:34:10.621 11:21:38 -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.621 11:21:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.621 11:21:38 -- target/dif.sh@36 -- # local sub_id=0 00:34:10.621 11:21:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.621 11:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 11:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 11:21:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.621 11:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 11:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 11:21:38 -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.621 11:21:38 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:10.621 11:21:38 -- target/dif.sh@36 -- # local sub_id=1 00:34:10.621 11:21:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.621 11:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 11:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 11:21:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:10.621 11:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:38 -- 
common/autotest_common.sh@10 -- # set +x 00:34:10.621 ************************************ 00:34:10.621 END TEST fio_dif_1_multi_subsystems 00:34:10.621 ************************************ 00:34:10.621 11:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 00:34:10.621 real 0m11.197s 00:34:10.621 user 0m19.837s 00:34:10.621 sys 0m1.157s 00:34:10.621 11:21:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:10.621 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 11:21:38 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:10.621 11:21:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:10.621 11:21:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:10.621 11:21:38 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 ************************************ 00:34:10.621 START TEST fio_dif_rand_params 00:34:10.621 ************************************ 00:34:10.621 11:21:39 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:34:10.621 11:21:39 -- target/dif.sh@100 -- # local NULL_DIF 00:34:10.621 11:21:39 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:10.621 11:21:39 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:10.621 11:21:39 -- target/dif.sh@103 -- # bs=128k 00:34:10.621 11:21:39 -- target/dif.sh@103 -- # numjobs=3 00:34:10.621 11:21:39 -- target/dif.sh@103 -- # iodepth=3 00:34:10.621 11:21:39 -- target/dif.sh@103 -- # runtime=5 00:34:10.621 11:21:39 -- target/dif.sh@105 -- # create_subsystems 0 00:34:10.621 11:21:39 -- target/dif.sh@28 -- # local sub 00:34:10.621 11:21:39 -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.621 11:21:39 -- target/dif.sh@31 -- # create_subsystem 0 00:34:10.621 11:21:39 -- target/dif.sh@18 -- # local sub_id=0 00:34:10.621 11:21:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:10.621 11:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 bdev_null0 00:34:10.621 11:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 11:21:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:10.621 11:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 11:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 11:21:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:10.621 11:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 11:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 11:21:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.621 11:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:10.621 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:34:10.621 [2024-04-18 11:21:39.102657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.621 11:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:10.621 11:21:39 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:10.621 11:21:39 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:10.621 11:21:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:10.621 11:21:39 
-- nvmf/common.sh@521 -- # config=() 00:34:10.621 11:21:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.621 11:21:39 -- target/dif.sh@82 -- # gen_fio_conf 00:34:10.621 11:21:39 -- nvmf/common.sh@521 -- # local subsystem config 00:34:10.621 11:21:39 -- target/dif.sh@54 -- # local file 00:34:10.621 11:21:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:10.621 11:21:39 -- target/dif.sh@56 -- # cat 00:34:10.621 11:21:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:10.621 { 00:34:10.621 "params": { 00:34:10.621 "name": "Nvme$subsystem", 00:34:10.621 "trtype": "$TEST_TRANSPORT", 00:34:10.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.621 "adrfam": "ipv4", 00:34:10.621 "trsvcid": "$NVMF_PORT", 00:34:10.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.621 "hdgst": ${hdgst:-false}, 00:34:10.621 "ddgst": ${ddgst:-false} 00:34:10.621 }, 00:34:10.621 "method": "bdev_nvme_attach_controller" 00:34:10.621 } 00:34:10.621 EOF 00:34:10.621 )") 00:34:10.621 11:21:39 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.621 11:21:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:10.621 11:21:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.621 11:21:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:10.621 11:21:39 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:10.621 11:21:39 -- nvmf/common.sh@543 -- # cat 00:34:10.621 11:21:39 -- common/autotest_common.sh@1327 -- # shift 00:34:10.621 11:21:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:10.621 11:21:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.621 11:21:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:10.621 11:21:39 -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:10.621 11:21:39 -- nvmf/common.sh@545 -- # jq . 
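fio_dif_rand_params reuses the single-subsystem layout, but the null bdev is now created with --dif-type 3 and the first parameter set is bs=128k, numjobs=3, iodepth=3, runtime=5 (the variables assigned at the start of the test). Only the resulting 128KiB/randread/iodepth=3 preamble appears below, so this command-line equivalent is an approximation with assumed flag spellings:

    # Rough equivalent of the generated job: 3 jobs, QD 3, 128 KiB random reads for 5 s
    # against the single Nvme0n1 bdev; nvme.json stands for the config printed just below.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=./nvme.json \
        --thread=1 --filename=Nvme0n1 --rw=randread --bs=128k \
        --numjobs=3 --iodepth=3 --runtime=5 --time_based=1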
00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:10.621 11:21:39 -- nvmf/common.sh@546 -- # IFS=, 00:34:10.621 11:21:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:10.621 "params": { 00:34:10.621 "name": "Nvme0", 00:34:10.621 "trtype": "tcp", 00:34:10.621 "traddr": "10.0.0.2", 00:34:10.621 "adrfam": "ipv4", 00:34:10.621 "trsvcid": "4420", 00:34:10.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.621 "hdgst": false, 00:34:10.621 "ddgst": false 00:34:10.621 }, 00:34:10.621 "method": "bdev_nvme_attach_controller" 00:34:10.621 }' 00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:10.621 11:21:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:10.621 11:21:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:10.621 11:21:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:10.621 11:21:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:10.621 11:21:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:10.621 11:21:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.879 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:10.879 ... 00:34:10.879 fio-3.35 00:34:10.879 Starting 3 threads 00:34:17.442 00:34:17.442 filename0: (groupid=0, jobs=1): err= 0: pid=109652: Thu Apr 18 11:21:44 2024 00:34:17.442 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(154MiB/5006msec) 00:34:17.442 slat (nsec): min=7436, max=44512, avg=11885.18, stdev=3404.76 00:34:17.442 clat (usec): min=5904, max=52802, avg=12190.76, stdev=5043.70 00:34:17.442 lat (usec): min=5916, max=52813, avg=12202.65, stdev=5043.57 00:34:17.442 clat percentiles (usec): 00:34:17.442 | 1.00th=[ 6718], 5.00th=[ 8160], 10.00th=[10290], 20.00th=[10945], 00:34:17.442 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:34:17.442 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:34:17.442 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:34:17.442 | 99.99th=[52691] 00:34:17.442 bw ( KiB/s): min=28928, max=34304, per=33.79%, avg=31436.80, stdev=1760.43, samples=10 00:34:17.442 iops : min= 226, max= 268, avg=245.60, stdev=13.75, samples=10 00:34:17.443 lat (msec) : 10=8.70%, 20=89.84%, 100=1.46% 00:34:17.443 cpu : usr=92.57%, sys=5.97%, ctx=17, majf=0, minf=0 00:34:17.443 IO depths : 1=4.3%, 2=95.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.443 issued rwts: total=1230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.443 filename0: (groupid=0, jobs=1): err= 0: pid=109653: Thu Apr 18 11:21:44 2024 00:34:17.443 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(131MiB/5006msec) 00:34:17.443 slat (nsec): min=6094, max=57861, avg=11710.66, stdev=5608.27 00:34:17.443 clat (usec): min=8292, max=16868, avg=14313.13, 
stdev=2084.94 00:34:17.443 lat (usec): min=8306, max=16884, avg=14324.84, stdev=2085.04 00:34:17.443 clat percentiles (usec): 00:34:17.443 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[13960], 00:34:17.443 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:34:17.443 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15926], 95.00th=[16319], 00:34:17.443 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:34:17.443 | 99.99th=[16909] 00:34:17.443 bw ( KiB/s): min=25344, max=29952, per=28.73%, avg=26726.40, stdev=1344.91, samples=10 00:34:17.443 iops : min= 198, max= 234, avg=208.80, stdev=10.51, samples=10 00:34:17.443 lat (msec) : 10=10.79%, 20=89.21% 00:34:17.443 cpu : usr=92.09%, sys=6.29%, ctx=57, majf=0, minf=0 00:34:17.443 IO depths : 1=33.1%, 2=66.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.443 issued rwts: total=1047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.443 filename0: (groupid=0, jobs=1): err= 0: pid=109654: Thu Apr 18 11:21:44 2024 00:34:17.443 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(170MiB/5008msec) 00:34:17.443 slat (usec): min=7, max=185, avg=12.81, stdev= 5.91 00:34:17.443 clat (usec): min=6201, max=52538, avg=11004.49, stdev=4309.06 00:34:17.443 lat (usec): min=6212, max=52549, avg=11017.30, stdev=4309.03 00:34:17.443 clat percentiles (usec): 00:34:17.443 | 1.00th=[ 6521], 5.00th=[ 7767], 10.00th=[ 9372], 20.00th=[10028], 00:34:17.443 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:34:17.443 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:34:17.443 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[52691], 00:34:17.443 | 99.99th=[52691] 00:34:17.443 bw ( KiB/s): min=30658, max=37632, per=37.41%, avg=34809.80, stdev=2352.17, samples=10 00:34:17.443 iops : min= 239, max= 294, avg=271.90, stdev=18.48, samples=10 00:34:17.443 lat (msec) : 10=19.74%, 20=79.16%, 50=0.22%, 100=0.88% 00:34:17.443 cpu : usr=91.45%, sys=6.75%, ctx=7, majf=0, minf=0 00:34:17.443 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.443 issued rwts: total=1363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:17.443 00:34:17.443 Run status group 0 (all jobs): 00:34:17.443 READ: bw=90.9MiB/s (95.3MB/s), 26.1MiB/s-34.0MiB/s (27.4MB/s-35.7MB/s), io=455MiB (477MB), run=5006-5008msec 00:34:17.443 11:21:45 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:17.443 11:21:45 -- target/dif.sh@43 -- # local sub 00:34:17.443 11:21:45 -- target/dif.sh@45 -- # for sub in "$@" 00:34:17.443 11:21:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:17.443 11:21:45 -- target/dif.sh@36 -- # local sub_id=0 00:34:17.443 11:21:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:17.443 11:21:45 -- target/dif.sh@109 -- # bs=4k 00:34:17.443 11:21:45 -- target/dif.sh@109 -- # numjobs=8 00:34:17.443 11:21:45 -- target/dif.sh@109 -- # iodepth=16 00:34:17.443 11:21:45 -- target/dif.sh@109 -- # runtime= 00:34:17.443 11:21:45 -- target/dif.sh@109 -- # files=2 00:34:17.443 11:21:45 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:17.443 11:21:45 -- target/dif.sh@28 -- # local sub 00:34:17.443 11:21:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.443 11:21:45 -- target/dif.sh@31 -- # create_subsystem 0 00:34:17.443 11:21:45 -- target/dif.sh@18 -- # local sub_id=0 00:34:17.443 11:21:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 bdev_null0 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 [2024-04-18 11:21:45.115515] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.443 11:21:45 -- target/dif.sh@31 -- # create_subsystem 1 00:34:17.443 11:21:45 -- target/dif.sh@18 -- # local sub_id=1 00:34:17.443 11:21:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 bdev_null1 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.443 11:21:45 -- target/dif.sh@31 -- # create_subsystem 2 00:34:17.443 11:21:45 -- target/dif.sh@18 -- # local sub_id=2 00:34:17.443 11:21:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 bdev_null2 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:17.443 11:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:17.443 11:21:45 -- common/autotest_common.sh@10 -- # set +x 00:34:17.443 11:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:17.443 11:21:45 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:17.443 11:21:45 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:17.443 11:21:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:17.443 11:21:45 -- nvmf/common.sh@521 -- # config=() 00:34:17.443 11:21:45 -- nvmf/common.sh@521 -- # local subsystem config 00:34:17.443 11:21:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:17.444 11:21:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:17.444 { 00:34:17.444 "params": { 00:34:17.444 "name": "Nvme$subsystem", 00:34:17.444 "trtype": "$TEST_TRANSPORT", 00:34:17.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.444 "adrfam": "ipv4", 00:34:17.444 "trsvcid": "$NVMF_PORT", 00:34:17.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.444 "hdgst": ${hdgst:-false}, 00:34:17.444 "ddgst": ${ddgst:-false} 00:34:17.444 }, 00:34:17.444 "method": "bdev_nvme_attach_controller" 00:34:17.444 } 00:34:17.444 EOF 00:34:17.444 )") 00:34:17.444 11:21:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.444 11:21:45 -- target/dif.sh@82 -- # gen_fio_conf 00:34:17.444 11:21:45 -- target/dif.sh@54 -- # local file 00:34:17.444 11:21:45 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.444 11:21:45 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:17.444 11:21:45 -- target/dif.sh@56 -- # cat 00:34:17.444 11:21:45 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:17.444 11:21:45 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:17.444 11:21:45 -- nvmf/common.sh@543 -- # cat 00:34:17.444 11:21:45 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:17.444 11:21:45 -- common/autotest_common.sh@1327 -- # shift 00:34:17.444 11:21:45 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:17.444 11:21:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.444 11:21:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:17.444 11:21:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.444 11:21:45 -- target/dif.sh@73 -- # cat 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:17.444 11:21:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:17.444 11:21:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:17.444 { 00:34:17.444 "params": { 00:34:17.444 "name": "Nvme$subsystem", 00:34:17.444 "trtype": "$TEST_TRANSPORT", 00:34:17.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.444 "adrfam": "ipv4", 00:34:17.444 "trsvcid": "$NVMF_PORT", 00:34:17.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.444 "hdgst": ${hdgst:-false}, 00:34:17.444 "ddgst": ${ddgst:-false} 00:34:17.444 }, 00:34:17.444 "method": "bdev_nvme_attach_controller" 00:34:17.444 } 00:34:17.444 EOF 00:34:17.444 )") 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:17.444 11:21:45 -- nvmf/common.sh@543 -- # cat 00:34:17.444 11:21:45 -- target/dif.sh@72 -- # (( file++ )) 00:34:17.444 11:21:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.444 11:21:45 -- target/dif.sh@73 -- # cat 00:34:17.444 11:21:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:17.444 11:21:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:17.444 { 00:34:17.444 "params": { 00:34:17.444 "name": "Nvme$subsystem", 00:34:17.444 "trtype": "$TEST_TRANSPORT", 00:34:17.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.444 "adrfam": "ipv4", 00:34:17.444 "trsvcid": "$NVMF_PORT", 00:34:17.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.444 "hdgst": ${hdgst:-false}, 00:34:17.444 "ddgst": ${ddgst:-false} 00:34:17.444 }, 00:34:17.444 "method": "bdev_nvme_attach_controller" 00:34:17.444 } 00:34:17.444 EOF 00:34:17.444 )") 00:34:17.444 11:21:45 -- target/dif.sh@72 -- # (( file++ )) 00:34:17.444 11:21:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.444 11:21:45 -- nvmf/common.sh@543 -- # cat 00:34:17.444 11:21:45 -- nvmf/common.sh@545 -- # jq . 
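create_subsystems 0 1 2 loops the per-index setup already shown, this time with DIF type 2 null bdevs; condensed, and under the same rpc.py assumption as before:

    for i in 0 1 2; do
        scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

With numjobs=8, iodepth=16 and three files, the JSON printed next carries three bdev_nvme_attach_controller entries and the following fio run starts 24 threads (8 jobs per filename).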
00:34:17.444 11:21:45 -- nvmf/common.sh@546 -- # IFS=, 00:34:17.444 11:21:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:17.444 "params": { 00:34:17.444 "name": "Nvme0", 00:34:17.444 "trtype": "tcp", 00:34:17.444 "traddr": "10.0.0.2", 00:34:17.444 "adrfam": "ipv4", 00:34:17.444 "trsvcid": "4420", 00:34:17.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:17.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:17.444 "hdgst": false, 00:34:17.444 "ddgst": false 00:34:17.444 }, 00:34:17.444 "method": "bdev_nvme_attach_controller" 00:34:17.444 },{ 00:34:17.444 "params": { 00:34:17.444 "name": "Nvme1", 00:34:17.444 "trtype": "tcp", 00:34:17.444 "traddr": "10.0.0.2", 00:34:17.444 "adrfam": "ipv4", 00:34:17.444 "trsvcid": "4420", 00:34:17.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.444 "hdgst": false, 00:34:17.444 "ddgst": false 00:34:17.444 }, 00:34:17.444 "method": "bdev_nvme_attach_controller" 00:34:17.444 },{ 00:34:17.444 "params": { 00:34:17.444 "name": "Nvme2", 00:34:17.444 "trtype": "tcp", 00:34:17.444 "traddr": "10.0.0.2", 00:34:17.444 "adrfam": "ipv4", 00:34:17.444 "trsvcid": "4420", 00:34:17.444 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:17.444 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:17.444 "hdgst": false, 00:34:17.444 "ddgst": false 00:34:17.444 }, 00:34:17.444 "method": "bdev_nvme_attach_controller" 00:34:17.444 }' 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:17.444 11:21:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:17.444 11:21:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:17.444 11:21:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:17.444 11:21:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:17.444 11:21:45 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:17.444 11:21:45 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.444 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:17.444 ... 00:34:17.444 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:17.444 ... 00:34:17.444 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:17.444 ... 
00:34:17.444 fio-3.35 00:34:17.444 Starting 24 threads 00:34:29.805 00:34:29.805 filename0: (groupid=0, jobs=1): err= 0: pid=109751: Thu Apr 18 11:21:56 2024 00:34:29.805 read: IOPS=207, BW=830KiB/s (850kB/s)(8348KiB/10056msec) 00:34:29.805 slat (usec): min=6, max=8022, avg=16.66, stdev=196.08 00:34:29.805 clat (msec): min=17, max=143, avg=76.90, stdev=21.25 00:34:29.805 lat (msec): min=17, max=143, avg=76.92, stdev=21.25 00:34:29.805 clat percentiles (msec): 00:34:29.805 | 1.00th=[ 26], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:34:29.805 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:34:29.805 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 115], 00:34:29.805 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:34:29.805 | 99.99th=[ 144] 00:34:29.805 bw ( KiB/s): min= 688, max= 1134, per=4.73%, avg=827.55, stdev=110.12, samples=20 00:34:29.805 iops : min= 172, max= 283, avg=206.85, stdev=27.46, samples=20 00:34:29.805 lat (msec) : 20=0.53%, 50=12.75%, 100=74.08%, 250=12.65% 00:34:29.805 cpu : usr=37.43%, sys=0.94%, ctx=1090, majf=0, minf=9 00:34:29.805 IO depths : 1=0.1%, 2=0.2%, 4=5.6%, 8=80.0%, 16=14.1%, 32=0.0%, >=64=0.0% 00:34:29.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 complete : 0=0.0%, 4=89.0%, 8=7.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 issued rwts: total=2087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.805 filename0: (groupid=0, jobs=1): err= 0: pid=109752: Thu Apr 18 11:21:56 2024 00:34:29.805 read: IOPS=179, BW=718KiB/s (735kB/s)(7200KiB/10026msec) 00:34:29.805 slat (usec): min=4, max=8023, avg=26.94, stdev=328.21 00:34:29.805 clat (msec): min=26, max=177, avg=88.85, stdev=28.14 00:34:29.805 lat (msec): min=26, max=177, avg=88.88, stdev=28.14 00:34:29.805 clat percentiles (msec): 00:34:29.805 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 67], 00:34:29.805 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 99], 00:34:29.805 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 140], 00:34:29.805 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 178], 99.95th=[ 178], 00:34:29.805 | 99.99th=[ 178] 00:34:29.805 bw ( KiB/s): min= 512, max= 944, per=3.95%, avg=690.53, stdev=126.46, samples=19 00:34:29.805 iops : min= 128, max= 236, avg=172.63, stdev=31.62, samples=19 00:34:29.805 lat (msec) : 50=8.39%, 100=54.44%, 250=37.17% 00:34:29.805 cpu : usr=38.99%, sys=1.20%, ctx=1084, majf=0, minf=9 00:34:29.805 IO depths : 1=2.1%, 2=4.7%, 4=12.9%, 8=69.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:34:29.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.805 filename0: (groupid=0, jobs=1): err= 0: pid=109753: Thu Apr 18 11:21:56 2024 00:34:29.805 read: IOPS=187, BW=750KiB/s (768kB/s)(7532KiB/10037msec) 00:34:29.805 slat (nsec): min=4995, max=40136, avg=11255.78, stdev=4239.65 00:34:29.805 clat (msec): min=32, max=190, avg=85.14, stdev=28.28 00:34:29.805 lat (msec): min=32, max=190, avg=85.15, stdev=28.28 00:34:29.805 clat percentiles (msec): 00:34:29.805 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:34:29.805 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 85], 00:34:29.805 | 70.00th=[ 97], 80.00th=[ 109], 90.00th=[ 121], 
95.00th=[ 144], 00:34:29.805 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 190], 99.95th=[ 190], 00:34:29.805 | 99.99th=[ 190] 00:34:29.805 bw ( KiB/s): min= 464, max= 992, per=4.28%, avg=748.15, stdev=146.55, samples=20 00:34:29.805 iops : min= 116, max= 248, avg=187.00, stdev=36.65, samples=20 00:34:29.805 lat (msec) : 50=12.06%, 100=61.07%, 250=26.87% 00:34:29.805 cpu : usr=32.20%, sys=0.96%, ctx=875, majf=0, minf=9 00:34:29.805 IO depths : 1=0.6%, 2=1.3%, 4=7.3%, 8=77.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:34:29.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 complete : 0=0.0%, 4=89.1%, 8=7.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 issued rwts: total=1883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.805 filename0: (groupid=0, jobs=1): err= 0: pid=109754: Thu Apr 18 11:21:56 2024 00:34:29.805 read: IOPS=159, BW=638KiB/s (653kB/s)(6380KiB/10007msec) 00:34:29.805 slat (usec): min=4, max=8026, avg=26.36, stdev=347.44 00:34:29.805 clat (msec): min=38, max=190, avg=100.15, stdev=27.93 00:34:29.805 lat (msec): min=38, max=190, avg=100.18, stdev=27.92 00:34:29.805 clat percentiles (msec): 00:34:29.805 | 1.00th=[ 51], 5.00th=[ 62], 10.00th=[ 72], 20.00th=[ 73], 00:34:29.805 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 96], 60.00th=[ 108], 00:34:29.805 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 150], 00:34:29.805 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 190], 99.95th=[ 190], 00:34:29.805 | 99.99th=[ 190] 00:34:29.805 bw ( KiB/s): min= 384, max= 784, per=3.57%, avg=624.42, stdev=125.40, samples=19 00:34:29.805 iops : min= 96, max= 196, avg=156.11, stdev=31.35, samples=19 00:34:29.805 lat (msec) : 50=0.69%, 100=51.22%, 250=48.09% 00:34:29.805 cpu : usr=32.70%, sys=0.93%, ctx=879, majf=0, minf=9 00:34:29.805 IO depths : 1=3.3%, 2=7.0%, 4=18.3%, 8=61.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:34:29.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 issued rwts: total=1595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.805 filename0: (groupid=0, jobs=1): err= 0: pid=109755: Thu Apr 18 11:21:56 2024 00:34:29.805 read: IOPS=195, BW=780KiB/s (799kB/s)(7808KiB/10005msec) 00:34:29.805 slat (nsec): min=7530, max=47182, avg=11324.74, stdev=4928.86 00:34:29.805 clat (msec): min=13, max=191, avg=81.92, stdev=29.33 00:34:29.805 lat (msec): min=13, max=191, avg=81.93, stdev=29.33 00:34:29.805 clat percentiles (msec): 00:34:29.805 | 1.00th=[ 14], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:34:29.805 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:34:29.805 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 133], 00:34:29.805 | 99.00th=[ 159], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 192], 00:34:29.805 | 99.99th=[ 192] 00:34:29.805 bw ( KiB/s): min= 464, max= 1282, per=4.43%, avg=774.84, stdev=212.47, samples=19 00:34:29.805 iops : min= 116, max= 320, avg=193.68, stdev=53.05, samples=19 00:34:29.805 lat (msec) : 20=1.64%, 50=12.81%, 100=59.12%, 250=26.43% 00:34:29.805 cpu : usr=39.80%, sys=1.04%, ctx=1141, majf=0, minf=9 00:34:29.805 IO depths : 1=1.7%, 2=4.4%, 4=13.8%, 8=68.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:34:29.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.805 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:29.805 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename0: (groupid=0, jobs=1): err= 0: pid=109756: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=173, BW=693KiB/s (710kB/s)(6944KiB/10022msec) 00:34:29.806 slat (usec): min=5, max=8072, avg=25.82, stdev=323.96 00:34:29.806 clat (msec): min=34, max=173, avg=92.22, stdev=25.62 00:34:29.806 lat (msec): min=34, max=173, avg=92.25, stdev=25.62 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 41], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 72], 00:34:29.806 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 101], 00:34:29.806 | 70.00th=[ 110], 80.00th=[ 115], 90.00th=[ 124], 95.00th=[ 136], 00:34:29.806 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 174], 00:34:29.806 | 99.99th=[ 174] 00:34:29.806 bw ( KiB/s): min= 512, max= 992, per=3.91%, avg=683.84, stdev=147.05, samples=19 00:34:29.806 iops : min= 128, max= 248, avg=170.95, stdev=36.75, samples=19 00:34:29.806 lat (msec) : 50=5.18%, 100=55.18%, 250=39.63% 00:34:29.806 cpu : usr=37.91%, sys=1.14%, ctx=1190, majf=0, minf=9 00:34:29.806 IO depths : 1=2.4%, 2=5.2%, 4=13.9%, 8=67.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:34:29.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 issued rwts: total=1736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename0: (groupid=0, jobs=1): err= 0: pid=109757: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=162, BW=649KiB/s (664kB/s)(6488KiB/10004msec) 00:34:29.806 slat (usec): min=4, max=4022, avg=13.45, stdev=99.70 00:34:29.806 clat (msec): min=3, max=196, avg=98.59, stdev=31.47 00:34:29.806 lat (msec): min=3, max=196, avg=98.60, stdev=31.47 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 21], 5.00th=[ 50], 10.00th=[ 68], 20.00th=[ 72], 00:34:29.806 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 99], 60.00th=[ 111], 00:34:29.806 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 140], 95.00th=[ 155], 00:34:29.806 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 197], 99.95th=[ 197], 00:34:29.806 | 99.99th=[ 197] 00:34:29.806 bw ( KiB/s): min= 512, max= 864, per=3.60%, avg=629.05, stdev=129.33, samples=19 00:34:29.806 iops : min= 128, max= 216, avg=157.26, stdev=32.33, samples=19 00:34:29.806 lat (msec) : 4=0.99%, 50=4.32%, 100=45.25%, 250=49.45% 00:34:29.806 cpu : usr=41.95%, sys=1.45%, ctx=1249, majf=0, minf=9 00:34:29.806 IO depths : 1=3.6%, 2=7.6%, 4=18.2%, 8=61.3%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:29.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 issued rwts: total=1622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename0: (groupid=0, jobs=1): err= 0: pid=109758: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=167, BW=669KiB/s (685kB/s)(6696KiB/10012msec) 00:34:29.806 slat (nsec): min=7570, max=37270, avg=11146.34, stdev=4267.11 00:34:29.806 clat (msec): min=18, max=219, avg=95.60, stdev=30.59 00:34:29.806 lat (msec): min=18, max=219, avg=95.61, stdev=30.59 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 72], 00:34:29.806 | 30.00th=[ 73], 40.00th=[ 84], 
50.00th=[ 96], 60.00th=[ 107], 00:34:29.806 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 146], 00:34:29.806 | 99.00th=[ 184], 99.50th=[ 197], 99.90th=[ 220], 99.95th=[ 220], 00:34:29.806 | 99.99th=[ 220] 00:34:29.806 bw ( KiB/s): min= 400, max= 944, per=3.76%, avg=657.58, stdev=164.74, samples=19 00:34:29.806 iops : min= 100, max= 236, avg=164.37, stdev=41.19, samples=19 00:34:29.806 lat (msec) : 20=0.30%, 50=4.18%, 100=50.42%, 250=45.10% 00:34:29.806 cpu : usr=38.61%, sys=1.06%, ctx=1132, majf=0, minf=9 00:34:29.806 IO depths : 1=2.7%, 2=6.0%, 4=15.5%, 8=65.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:34:29.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 complete : 0=0.0%, 4=91.4%, 8=3.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename1: (groupid=0, jobs=1): err= 0: pid=109759: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=167, BW=669KiB/s (685kB/s)(6696KiB/10016msec) 00:34:29.806 slat (usec): min=4, max=5106, avg=16.65, stdev=155.28 00:34:29.806 clat (msec): min=35, max=169, avg=95.60, stdev=26.59 00:34:29.806 lat (msec): min=35, max=169, avg=95.62, stdev=26.59 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 36], 5.00th=[ 53], 10.00th=[ 63], 20.00th=[ 72], 00:34:29.806 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 109], 00:34:29.806 | 70.00th=[ 112], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 142], 00:34:29.806 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:34:29.806 | 99.99th=[ 169] 00:34:29.806 bw ( KiB/s): min= 512, max= 944, per=3.76%, avg=657.68, stdev=138.63, samples=19 00:34:29.806 iops : min= 128, max= 236, avg=164.42, stdev=34.66, samples=19 00:34:29.806 lat (msec) : 50=4.42%, 100=48.27%, 250=47.31% 00:34:29.806 cpu : usr=43.40%, sys=1.11%, ctx=1196, majf=0, minf=9 00:34:29.806 IO depths : 1=3.5%, 2=7.9%, 4=19.2%, 8=60.2%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:29.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename1: (groupid=0, jobs=1): err= 0: pid=109760: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=175, BW=702KiB/s (719kB/s)(7044KiB/10030msec) 00:34:29.806 slat (usec): min=5, max=8022, avg=17.41, stdev=206.68 00:34:29.806 clat (msec): min=35, max=191, avg=90.92, stdev=30.24 00:34:29.806 lat (msec): min=35, max=191, avg=90.94, stdev=30.25 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 62], 00:34:29.806 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 96], 00:34:29.806 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 133], 95.00th=[ 146], 00:34:29.806 | 99.00th=[ 161], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 192], 00:34:29.806 | 99.99th=[ 192] 00:34:29.806 bw ( KiB/s): min= 464, max= 944, per=3.89%, avg=680.00, stdev=146.91, samples=19 00:34:29.806 iops : min= 116, max= 236, avg=170.00, stdev=36.73, samples=19 00:34:29.806 lat (msec) : 50=7.95%, 100=57.58%, 250=34.47% 00:34:29.806 cpu : usr=34.14%, sys=0.99%, ctx=929, majf=0, minf=9 00:34:29.806 IO depths : 1=2.0%, 2=4.0%, 4=11.9%, 8=70.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:34:29.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:29.806 complete : 0=0.0%, 4=90.4%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 issued rwts: total=1761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename1: (groupid=0, jobs=1): err= 0: pid=109761: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=210, BW=842KiB/s (862kB/s)(8456KiB/10046msec) 00:34:29.806 slat (usec): min=7, max=8044, avg=15.60, stdev=174.85 00:34:29.806 clat (msec): min=17, max=147, avg=75.84, stdev=22.28 00:34:29.806 lat (msec): min=17, max=147, avg=75.85, stdev=22.29 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:34:29.806 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 81], 00:34:29.806 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 118], 00:34:29.806 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:34:29.806 | 99.99th=[ 148] 00:34:29.806 bw ( KiB/s): min= 640, max= 1152, per=4.80%, avg=839.00, stdev=148.64, samples=20 00:34:29.806 iops : min= 160, max= 288, avg=209.75, stdev=37.16, samples=20 00:34:29.806 lat (msec) : 20=0.66%, 50=16.27%, 100=68.54%, 250=14.52% 00:34:29.806 cpu : usr=37.36%, sys=1.12%, ctx=1120, majf=0, minf=10 00:34:29.806 IO depths : 1=0.1%, 2=0.4%, 4=5.3%, 8=80.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:34:29.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 complete : 0=0.0%, 4=89.0%, 8=7.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename1: (groupid=0, jobs=1): err= 0: pid=109762: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=162, BW=652KiB/s (668kB/s)(6528KiB/10014msec) 00:34:29.806 slat (usec): min=4, max=8023, avg=20.72, stdev=280.38 00:34:29.806 clat (msec): min=31, max=167, avg=98.07, stdev=27.38 00:34:29.806 lat (msec): min=31, max=167, avg=98.09, stdev=27.37 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 36], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 72], 00:34:29.806 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 105], 60.00th=[ 108], 00:34:29.806 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 144], 00:34:29.806 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:34:29.806 | 99.99th=[ 169] 00:34:29.806 bw ( KiB/s): min= 510, max= 816, per=3.65%, avg=639.95, stdev=130.47, samples=19 00:34:29.806 iops : min= 127, max= 204, avg=159.95, stdev=32.63, samples=19 00:34:29.806 lat (msec) : 50=4.04%, 100=45.47%, 250=50.49% 00:34:29.806 cpu : usr=32.10%, sys=1.07%, ctx=878, majf=0, minf=9 00:34:29.806 IO depths : 1=2.7%, 2=5.9%, 4=15.7%, 8=65.4%, 16=10.2%, 32=0.0%, >=64=0.0% 00:34:29.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.806 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.806 filename1: (groupid=0, jobs=1): err= 0: pid=109763: Thu Apr 18 11:21:56 2024 00:34:29.806 read: IOPS=184, BW=738KiB/s (756kB/s)(7408KiB/10036msec) 00:34:29.806 slat (usec): min=7, max=4020, avg=13.39, stdev=93.27 00:34:29.806 clat (msec): min=5, max=184, avg=86.53, stdev=34.77 00:34:29.806 lat (msec): min=5, max=184, avg=86.54, stdev=34.77 00:34:29.806 clat percentiles (msec): 00:34:29.806 | 1.00th=[ 8], 5.00th=[ 40], 
10.00th=[ 50], 20.00th=[ 56], 00:34:29.806 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 95], 00:34:29.806 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 136], 95.00th=[ 148], 00:34:29.806 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 186], 00:34:29.806 | 99.99th=[ 186] 00:34:29.806 bw ( KiB/s): min= 384, max= 1444, per=4.20%, avg=735.95, stdev=254.20, samples=20 00:34:29.806 iops : min= 96, max= 361, avg=183.95, stdev=63.59, samples=20 00:34:29.806 lat (msec) : 10=2.59%, 20=0.86%, 50=10.26%, 100=50.05%, 250=36.23% 00:34:29.806 cpu : usr=36.14%, sys=1.24%, ctx=1361, majf=0, minf=9 00:34:29.807 IO depths : 1=2.0%, 2=4.5%, 4=13.0%, 8=68.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=91.0%, 8=4.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename1: (groupid=0, jobs=1): err= 0: pid=109764: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=160, BW=642KiB/s (658kB/s)(6424KiB/10003msec) 00:34:29.807 slat (usec): min=4, max=8021, avg=18.66, stdev=223.18 00:34:29.807 clat (msec): min=3, max=215, avg=99.52, stdev=31.67 00:34:29.807 lat (msec): min=3, max=215, avg=99.54, stdev=31.66 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 34], 5.00th=[ 57], 10.00th=[ 70], 20.00th=[ 72], 00:34:29.807 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 108], 00:34:29.807 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 155], 00:34:29.807 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 215], 99.95th=[ 215], 00:34:29.807 | 99.99th=[ 215] 00:34:29.807 bw ( KiB/s): min= 512, max= 768, per=3.56%, avg=622.37, stdev=103.85, samples=19 00:34:29.807 iops : min= 128, max= 192, avg=155.58, stdev=25.95, samples=19 00:34:29.807 lat (msec) : 4=1.00%, 50=3.74%, 100=46.89%, 250=48.38% 00:34:29.807 cpu : usr=34.43%, sys=1.06%, ctx=917, majf=0, minf=9 00:34:29.807 IO depths : 1=2.8%, 2=6.1%, 4=15.9%, 8=65.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=91.6%, 8=2.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=1606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename1: (groupid=0, jobs=1): err= 0: pid=109765: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=167, BW=671KiB/s (687kB/s)(6728KiB/10022msec) 00:34:29.807 slat (usec): min=4, max=8026, avg=17.14, stdev=197.11 00:34:29.807 clat (msec): min=34, max=183, avg=95.17, stdev=26.72 00:34:29.807 lat (msec): min=34, max=183, avg=95.19, stdev=26.72 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 68], 20.00th=[ 73], 00:34:29.807 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 95], 60.00th=[ 103], 00:34:29.807 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 136], 95.00th=[ 146], 00:34:29.807 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 184], 00:34:29.807 | 99.99th=[ 184] 00:34:29.807 bw ( KiB/s): min= 512, max= 896, per=3.81%, avg=666.95, stdev=124.40, samples=19 00:34:29.807 iops : min= 128, max= 224, avg=166.74, stdev=31.10, samples=19 00:34:29.807 lat (msec) : 50=2.44%, 100=56.96%, 250=40.61% 00:34:29.807 cpu : usr=41.10%, sys=1.08%, ctx=1290, majf=0, minf=9 00:34:29.807 IO depths : 1=3.5%, 2=7.7%, 4=18.7%, 8=60.6%, 16=9.5%, 
32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=1682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename1: (groupid=0, jobs=1): err= 0: pid=109766: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=197, BW=791KiB/s (810kB/s)(7936KiB/10038msec) 00:34:29.807 slat (nsec): min=4782, max=34848, avg=10777.58, stdev=3762.18 00:34:29.807 clat (msec): min=10, max=191, avg=80.84, stdev=27.04 00:34:29.807 lat (msec): min=10, max=191, avg=80.85, stdev=27.04 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 18], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:34:29.807 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:34:29.807 | 70.00th=[ 92], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 127], 00:34:29.807 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 192], 99.95th=[ 192], 00:34:29.807 | 99.99th=[ 192] 00:34:29.807 bw ( KiB/s): min= 512, max= 1024, per=4.50%, avg=787.20, stdev=149.40, samples=20 00:34:29.807 iops : min= 128, max= 256, avg=196.80, stdev=37.35, samples=20 00:34:29.807 lat (msec) : 20=1.61%, 50=11.90%, 100=64.31%, 250=22.18% 00:34:29.807 cpu : usr=32.40%, sys=0.87%, ctx=888, majf=0, minf=9 00:34:29.807 IO depths : 1=1.2%, 2=2.5%, 4=8.9%, 8=75.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename2: (groupid=0, jobs=1): err= 0: pid=109767: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=191, BW=765KiB/s (784kB/s)(7680KiB/10036msec) 00:34:29.807 slat (usec): min=3, max=7023, avg=17.10, stdev=184.66 00:34:29.807 clat (msec): min=33, max=152, avg=83.37, stdev=20.56 00:34:29.807 lat (msec): min=33, max=152, avg=83.39, stdev=20.56 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 62], 20.00th=[ 71], 00:34:29.807 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 85], 00:34:29.807 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 122], 00:34:29.807 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 153], 99.95th=[ 153], 00:34:29.807 | 99.99th=[ 153] 00:34:29.807 bw ( KiB/s): min= 512, max= 896, per=4.35%, avg=761.35, stdev=95.00, samples=20 00:34:29.807 iops : min= 128, max= 224, avg=190.30, stdev=23.78, samples=20 00:34:29.807 lat (msec) : 50=4.22%, 100=75.83%, 250=19.95% 00:34:29.807 cpu : usr=43.05%, sys=1.34%, ctx=1227, majf=0, minf=9 00:34:29.807 IO depths : 1=2.6%, 2=5.4%, 4=14.3%, 8=67.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename2: (groupid=0, jobs=1): err= 0: pid=109768: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=166, BW=665KiB/s (681kB/s)(6656KiB/10013msec) 00:34:29.807 slat (usec): min=7, max=8031, avg=16.48, stdev=196.65 00:34:29.807 clat (msec): min=36, max=192, avg=96.15, stdev=29.93 00:34:29.807 lat (msec): min=36, max=192, 
avg=96.17, stdev=29.93 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 39], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 71], 00:34:29.807 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 94], 60.00th=[ 108], 00:34:29.807 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 146], 00:34:29.807 | 99.00th=[ 159], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:34:29.807 | 99.99th=[ 192] 00:34:29.807 bw ( KiB/s): min= 463, max= 944, per=3.73%, avg=653.42, stdev=153.81, samples=19 00:34:29.807 iops : min= 115, max= 236, avg=163.32, stdev=38.51, samples=19 00:34:29.807 lat (msec) : 50=5.59%, 100=49.76%, 250=44.65% 00:34:29.807 cpu : usr=37.17%, sys=1.33%, ctx=1047, majf=0, minf=9 00:34:29.807 IO depths : 1=2.8%, 2=6.2%, 4=15.7%, 8=65.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename2: (groupid=0, jobs=1): err= 0: pid=109769: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=206, BW=828KiB/s (847kB/s)(8316KiB/10048msec) 00:34:29.807 slat (usec): min=6, max=4023, avg=12.96, stdev=88.15 00:34:29.807 clat (msec): min=4, max=198, avg=77.13, stdev=30.06 00:34:29.807 lat (msec): min=4, max=198, avg=77.15, stdev=30.05 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 10], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 53], 00:34:29.807 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:34:29.807 | 70.00th=[ 86], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 128], 00:34:29.807 | 99.00th=[ 157], 99.50th=[ 176], 99.90th=[ 199], 99.95th=[ 199], 00:34:29.807 | 99.99th=[ 199] 00:34:29.807 bw ( KiB/s): min= 512, max= 1396, per=4.72%, avg=826.75, stdev=234.97, samples=20 00:34:29.807 iops : min= 128, max= 349, avg=206.65, stdev=58.78, samples=20 00:34:29.807 lat (msec) : 10=1.54%, 20=1.54%, 50=15.30%, 100=59.21%, 250=22.41% 00:34:29.807 cpu : usr=44.23%, sys=1.11%, ctx=1316, majf=0, minf=9 00:34:29.807 IO depths : 1=1.5%, 2=3.1%, 4=11.3%, 8=72.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename2: (groupid=0, jobs=1): err= 0: pid=109770: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=205, BW=822KiB/s (842kB/s)(8272KiB/10059msec) 00:34:29.807 slat (usec): min=4, max=8025, avg=19.82, stdev=238.81 00:34:29.807 clat (msec): min=11, max=167, avg=77.60, stdev=23.96 00:34:29.807 lat (msec): min=11, max=167, avg=77.62, stdev=23.95 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 16], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:34:29.807 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:34:29.807 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:34:29.807 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 167], 99.95th=[ 167], 00:34:29.807 | 99.99th=[ 169] 00:34:29.807 bw ( KiB/s): min= 640, max= 1232, per=4.69%, avg=820.80, stdev=154.90, samples=20 00:34:29.807 iops : min= 160, max= 308, avg=205.20, stdev=38.72, samples=20 00:34:29.807 lat (msec) : 20=1.55%, 50=11.12%, 100=72.39%, 250=14.94% 00:34:29.807 cpu : usr=39.55%, 
sys=1.14%, ctx=1182, majf=0, minf=9 00:34:29.807 IO depths : 1=1.1%, 2=2.2%, 4=9.9%, 8=74.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:34:29.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.807 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.807 filename2: (groupid=0, jobs=1): err= 0: pid=109771: Thu Apr 18 11:21:56 2024 00:34:29.807 read: IOPS=163, BW=652KiB/s (668kB/s)(6540KiB/10023msec) 00:34:29.807 slat (usec): min=3, max=9019, avg=17.29, stdev=222.82 00:34:29.807 clat (msec): min=32, max=188, avg=97.94, stdev=30.15 00:34:29.807 lat (msec): min=32, max=188, avg=97.95, stdev=30.14 00:34:29.807 clat percentiles (msec): 00:34:29.807 | 1.00th=[ 41], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 72], 00:34:29.808 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 100], 60.00th=[ 109], 00:34:29.808 | 70.00th=[ 113], 80.00th=[ 124], 90.00th=[ 140], 95.00th=[ 148], 00:34:29.808 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 190], 00:34:29.808 | 99.99th=[ 190] 00:34:29.808 bw ( KiB/s): min= 384, max= 944, per=3.67%, avg=641.32, stdev=153.80, samples=19 00:34:29.808 iops : min= 96, max= 236, avg=160.32, stdev=38.45, samples=19 00:34:29.808 lat (msec) : 50=5.87%, 100=44.89%, 250=49.24% 00:34:29.808 cpu : usr=37.49%, sys=1.02%, ctx=1346, majf=0, minf=9 00:34:29.808 IO depths : 1=1.4%, 2=3.1%, 4=10.8%, 8=72.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:34:29.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.808 filename2: (groupid=0, jobs=1): err= 0: pid=109772: Thu Apr 18 11:21:56 2024 00:34:29.808 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10032msec) 00:34:29.808 slat (usec): min=4, max=8023, avg=17.57, stdev=205.80 00:34:29.808 clat (msec): min=31, max=151, avg=84.05, stdev=23.74 00:34:29.808 lat (msec): min=31, max=151, avg=84.07, stdev=23.75 00:34:29.808 clat percentiles (msec): 00:34:29.808 | 1.00th=[ 41], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 66], 00:34:29.808 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 86], 00:34:29.808 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 131], 00:34:29.808 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:34:29.808 | 99.99th=[ 153] 00:34:29.808 bw ( KiB/s): min= 512, max= 1120, per=4.24%, avg=742.74, stdev=135.38, samples=19 00:34:29.808 iops : min= 128, max= 280, avg=185.68, stdev=33.84, samples=19 00:34:29.808 lat (msec) : 50=5.25%, 100=69.54%, 250=25.21% 00:34:29.808 cpu : usr=42.18%, sys=1.25%, ctx=1206, majf=0, minf=9 00:34:29.808 IO depths : 1=1.8%, 2=3.8%, 4=11.1%, 8=71.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:34:29.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.808 filename2: (groupid=0, jobs=1): err= 0: pid=109773: Thu Apr 18 11:21:56 2024 00:34:29.808 read: IOPS=222, BW=892KiB/s (913kB/s)(8940KiB/10023msec) 00:34:29.808 slat (usec): min=3, max=8021, avg=20.20, stdev=262.13 00:34:29.808 clat 
(msec): min=10, max=156, avg=71.57, stdev=21.19 00:34:29.808 lat (msec): min=10, max=156, avg=71.59, stdev=21.19 00:34:29.808 clat percentiles (msec): 00:34:29.808 | 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 54], 00:34:29.808 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:34:29.808 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:34:29.808 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 157], 00:34:29.808 | 99.99th=[ 157] 00:34:29.808 bw ( KiB/s): min= 688, max= 1280, per=5.10%, avg=891.60, stdev=160.85, samples=20 00:34:29.808 iops : min= 172, max= 320, avg=222.90, stdev=40.21, samples=20 00:34:29.808 lat (msec) : 20=1.43%, 50=16.20%, 100=72.89%, 250=9.49% 00:34:29.808 cpu : usr=42.76%, sys=1.30%, ctx=1364, majf=0, minf=9 00:34:29.808 IO depths : 1=0.9%, 2=1.8%, 4=8.2%, 8=76.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:29.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 complete : 0=0.0%, 4=89.5%, 8=5.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.808 filename2: (groupid=0, jobs=1): err= 0: pid=109774: Thu Apr 18 11:21:56 2024 00:34:29.808 read: IOPS=180, BW=723KiB/s (740kB/s)(7252KiB/10037msec) 00:34:29.808 slat (usec): min=5, max=8035, avg=25.07, stdev=315.51 00:34:29.808 clat (msec): min=38, max=194, avg=88.34, stdev=27.76 00:34:29.808 lat (msec): min=38, max=194, avg=88.37, stdev=27.76 00:34:29.808 clat percentiles (msec): 00:34:29.808 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 63], 00:34:29.808 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 96], 00:34:29.808 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 132], 00:34:29.808 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 194], 00:34:29.808 | 99.99th=[ 194] 00:34:29.808 bw ( KiB/s): min= 512, max= 960, per=4.12%, avg=720.20, stdev=148.37, samples=20 00:34:29.808 iops : min= 128, max= 240, avg=180.05, stdev=37.09, samples=20 00:34:29.808 lat (msec) : 50=9.60%, 100=56.65%, 250=33.76% 00:34:29.808 cpu : usr=33.65%, sys=0.89%, ctx=947, majf=0, minf=9 00:34:29.808 IO depths : 1=1.4%, 2=3.0%, 4=9.5%, 8=73.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:34:29.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.808 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:29.808 00:34:29.808 Run status group 0 (all jobs): 00:34:29.808 READ: bw=17.1MiB/s (17.9MB/s), 638KiB/s-892KiB/s (653kB/s-913kB/s), io=172MiB (180MB), run=10003-10059msec 00:34:29.808 11:21:56 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:29.808 11:21:56 -- target/dif.sh@43 -- # local sub 00:34:29.808 11:21:56 -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.808 11:21:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.808 11:21:56 -- target/dif.sh@36 -- # local sub_id=0 00:34:29.808 11:21:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.808 11:21:56 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.808 11:21:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:29.808 11:21:56 -- target/dif.sh@36 -- # local sub_id=1 00:34:29.808 11:21:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.808 11:21:56 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:29.808 11:21:56 -- target/dif.sh@36 -- # local sub_id=2 00:34:29.808 11:21:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:29.808 11:21:56 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:29.808 11:21:56 -- target/dif.sh@115 -- # numjobs=2 00:34:29.808 11:21:56 -- target/dif.sh@115 -- # iodepth=8 00:34:29.808 11:21:56 -- target/dif.sh@115 -- # runtime=5 00:34:29.808 11:21:56 -- target/dif.sh@115 -- # files=1 00:34:29.808 11:21:56 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:29.808 11:21:56 -- target/dif.sh@28 -- # local sub 00:34:29.808 11:21:56 -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.808 11:21:56 -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.808 11:21:56 -- target/dif.sh@18 -- # local sub_id=0 00:34:29.808 11:21:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 bdev_null0 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 [2024-04-18 11:21:56.555492] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.808 11:21:56 -- target/dif.sh@31 -- # create_subsystem 1 00:34:29.808 11:21:56 -- target/dif.sh@18 -- # local sub_id=1 00:34:29.808 11:21:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 bdev_null1 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.808 11:21:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:29.808 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.808 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.808 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.809 11:21:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:29.809 11:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:29.809 11:21:56 -- common/autotest_common.sh@10 -- # set +x 00:34:29.809 11:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:29.809 11:21:56 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:29.809 11:21:56 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:29.809 11:21:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:29.809 11:21:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.809 11:21:56 -- nvmf/common.sh@521 -- # config=() 00:34:29.809 11:21:56 -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.809 11:21:56 -- nvmf/common.sh@521 -- # local subsystem config 00:34:29.809 11:21:56 -- target/dif.sh@54 -- # local file 00:34:29.809 11:21:56 -- target/dif.sh@56 -- # cat 00:34:29.809 11:21:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:29.809 11:21:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:29.809 { 00:34:29.809 "params": { 00:34:29.809 "name": "Nvme$subsystem", 00:34:29.809 "trtype": "$TEST_TRANSPORT", 00:34:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.809 "adrfam": "ipv4", 00:34:29.809 "trsvcid": "$NVMF_PORT", 00:34:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.809 "hdgst": ${hdgst:-false}, 00:34:29.809 "ddgst": ${ddgst:-false} 00:34:29.809 }, 00:34:29.809 "method": "bdev_nvme_attach_controller" 00:34:29.809 } 00:34:29.809 EOF 00:34:29.809 )") 00:34:29.809 11:21:56 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:34:29.809 11:21:56 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:29.809 11:21:56 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.809 11:21:56 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:29.809 11:21:56 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:29.809 11:21:56 -- common/autotest_common.sh@1327 -- # shift 00:34:29.809 11:21:56 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:29.809 11:21:56 -- nvmf/common.sh@543 -- # cat 00:34:29.809 11:21:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.809 11:21:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.809 11:21:56 -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.809 11:21:56 -- target/dif.sh@73 -- # cat 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:29.809 11:21:56 -- target/dif.sh@72 -- # (( file++ )) 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:29.809 11:21:56 -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.809 11:21:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:29.809 11:21:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:29.809 { 00:34:29.809 "params": { 00:34:29.809 "name": "Nvme$subsystem", 00:34:29.809 "trtype": "$TEST_TRANSPORT", 00:34:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.809 "adrfam": "ipv4", 00:34:29.809 "trsvcid": "$NVMF_PORT", 00:34:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.809 "hdgst": ${hdgst:-false}, 00:34:29.809 "ddgst": ${ddgst:-false} 00:34:29.809 }, 00:34:29.809 "method": "bdev_nvme_attach_controller" 00:34:29.809 } 00:34:29.809 EOF 00:34:29.809 )") 00:34:29.809 11:21:56 -- nvmf/common.sh@543 -- # cat 00:34:29.809 11:21:56 -- nvmf/common.sh@545 -- # jq . 
00:34:29.809 11:21:56 -- nvmf/common.sh@546 -- # IFS=, 00:34:29.809 11:21:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:29.809 "params": { 00:34:29.809 "name": "Nvme0", 00:34:29.809 "trtype": "tcp", 00:34:29.809 "traddr": "10.0.0.2", 00:34:29.809 "adrfam": "ipv4", 00:34:29.809 "trsvcid": "4420", 00:34:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.809 "hdgst": false, 00:34:29.809 "ddgst": false 00:34:29.809 }, 00:34:29.809 "method": "bdev_nvme_attach_controller" 00:34:29.809 },{ 00:34:29.809 "params": { 00:34:29.809 "name": "Nvme1", 00:34:29.809 "trtype": "tcp", 00:34:29.809 "traddr": "10.0.0.2", 00:34:29.809 "adrfam": "ipv4", 00:34:29.809 "trsvcid": "4420", 00:34:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:29.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:29.809 "hdgst": false, 00:34:29.809 "ddgst": false 00:34:29.809 }, 00:34:29.809 "method": "bdev_nvme_attach_controller" 00:34:29.809 }' 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:29.809 11:21:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:29.809 11:21:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:29.809 11:21:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:29.809 11:21:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:29.809 11:21:56 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:29.809 11:21:56 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.809 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:29.809 ... 00:34:29.809 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:29.809 ... 
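For anyone reproducing this setup outside the test harness, the null-bdev subsystems that the fio job above reads from map one-to-one onto SPDK rpc.py calls, mirroring the rpc_cmd lines logged earlier. A minimal sketch, assuming an SPDK target application is already running from /home/vagrant/spdk_repo/spdk with its default RPC socket:

  # Sketch only: subsystem 0; repeating the sequence with bdev_null1 / cnode1
  # gives the second subsystem used by this run.
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # fio then consumes the generated JSON (printed above) through the spdk_bdev
  # ioengine via --ioengine=spdk_bdev and --spdk_json_conf.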
00:34:29.809 fio-3.35 00:34:29.809 Starting 4 threads 00:34:33.987 00:34:33.987 filename0: (groupid=0, jobs=1): err= 0: pid=109901: Thu Apr 18 11:22:02 2024 00:34:33.987 read: IOPS=1933, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5002msec) 00:34:33.987 slat (nsec): min=7176, max=40074, avg=10688.78, stdev=4135.70 00:34:33.987 clat (usec): min=1446, max=7216, avg=4090.37, stdev=286.45 00:34:33.987 lat (usec): min=1454, max=7228, avg=4101.05, stdev=286.39 00:34:33.987 clat percentiles (usec): 00:34:33.987 | 1.00th=[ 3720], 5.00th=[ 3982], 10.00th=[ 3982], 20.00th=[ 4015], 00:34:33.987 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4047], 00:34:33.987 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4228], 00:34:33.987 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 6521], 99.95th=[ 6980], 00:34:33.987 | 99.99th=[ 7242] 00:34:33.987 bw ( KiB/s): min=15104, max=15744, per=25.03%, avg=15473.78, stdev=289.38, samples=9 00:34:33.987 iops : min= 1888, max= 1968, avg=1934.22, stdev=36.17, samples=9 00:34:33.987 lat (msec) : 2=0.10%, 4=13.45%, 10=86.45% 00:34:33.987 cpu : usr=94.12%, sys=4.74%, ctx=18, majf=0, minf=0 00:34:33.987 IO depths : 1=8.7%, 2=18.0%, 4=57.0%, 8=16.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.987 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.987 issued rwts: total=9672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.987 filename0: (groupid=0, jobs=1): err= 0: pid=109902: Thu Apr 18 11:22:02 2024 00:34:33.987 read: IOPS=1930, BW=15.1MiB/s (15.8MB/s)(75.4MiB/5002msec) 00:34:33.987 slat (nsec): min=4939, max=74797, avg=14032.22, stdev=4321.28 00:34:33.987 clat (usec): min=2204, max=8565, avg=4071.82, stdev=281.12 00:34:33.987 lat (usec): min=2212, max=8580, avg=4085.85, stdev=281.15 00:34:33.987 clat percentiles (usec): 00:34:33.987 | 1.00th=[ 3916], 5.00th=[ 3949], 10.00th=[ 3949], 20.00th=[ 3982], 00:34:33.987 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4047], 00:34:33.987 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4178], 00:34:33.987 | 99.00th=[ 5407], 99.50th=[ 5932], 99.90th=[ 6390], 99.95th=[ 6587], 00:34:33.987 | 99.99th=[ 8586] 00:34:33.987 bw ( KiB/s): min=14848, max=15744, per=24.96%, avg=15431.11, stdev=327.03, samples=9 00:34:33.987 iops : min= 1856, max= 1968, avg=1928.89, stdev=40.88, samples=9 00:34:33.987 lat (msec) : 4=31.61%, 10=68.39% 00:34:33.987 cpu : usr=93.68%, sys=4.94%, ctx=1018, majf=0, minf=9 00:34:33.987 IO depths : 1=11.8%, 2=25.0%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.987 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.987 issued rwts: total=9656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.987 filename1: (groupid=0, jobs=1): err= 0: pid=109903: Thu Apr 18 11:22:02 2024 00:34:33.987 read: IOPS=1930, BW=15.1MiB/s (15.8MB/s)(75.4MiB/5001msec) 00:34:33.987 slat (nsec): min=7520, max=41940, avg=13839.88, stdev=3575.71 00:34:33.987 clat (usec): min=2124, max=6908, avg=4077.87, stdev=267.02 00:34:33.987 lat (usec): min=2138, max=6922, avg=4091.71, stdev=266.94 00:34:33.987 clat percentiles (usec): 00:34:33.987 | 1.00th=[ 3916], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 3982], 00:34:33.987 | 30.00th=[ 4015], 
40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4047], 00:34:33.987 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4178], 00:34:33.987 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 6325], 99.95th=[ 6390], 00:34:33.987 | 99.99th=[ 6915] 00:34:33.987 bw ( KiB/s): min=14877, max=15744, per=24.97%, avg=15434.33, stdev=320.65, samples=9 00:34:33.987 iops : min= 1859, max= 1968, avg=1929.22, stdev=40.22, samples=9 00:34:33.987 lat (msec) : 4=26.37%, 10=73.63% 00:34:33.987 cpu : usr=94.70%, sys=4.20%, ctx=8, majf=0, minf=10 00:34:33.987 IO depths : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.987 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.987 issued rwts: total=9656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.987 filename1: (groupid=0, jobs=1): err= 0: pid=109904: Thu Apr 18 11:22:02 2024 00:34:33.987 read: IOPS=1933, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5003msec) 00:34:33.987 slat (nsec): min=4726, max=53759, avg=9122.41, stdev=3060.65 00:34:33.987 clat (usec): min=1235, max=6143, avg=4091.95, stdev=313.83 00:34:33.987 lat (usec): min=1245, max=6157, avg=4101.07, stdev=313.83 00:34:33.987 clat percentiles (usec): 00:34:33.987 | 1.00th=[ 3818], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4015], 00:34:33.987 | 30.00th=[ 4047], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4047], 00:34:33.987 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4228], 00:34:33.987 | 99.00th=[ 5473], 99.50th=[ 5932], 99.90th=[ 5997], 99.95th=[ 6063], 00:34:33.987 | 99.99th=[ 6128] 00:34:33.987 bw ( KiB/s): min=15104, max=15744, per=25.01%, avg=15459.56, stdev=269.85, samples=9 00:34:33.988 iops : min= 1888, max= 1968, avg=1932.44, stdev=33.73, samples=9 00:34:33.988 lat (msec) : 2=0.08%, 4=8.00%, 10=91.91% 00:34:33.988 cpu : usr=93.82%, sys=4.98%, ctx=39, majf=0, minf=0 00:34:33.988 IO depths : 1=11.1%, 2=25.0%, 4=50.0%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.988 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.988 issued rwts: total=9672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:33.988 00:34:33.988 Run status group 0 (all jobs): 00:34:33.988 READ: bw=60.4MiB/s (63.3MB/s), 15.1MiB/s-15.1MiB/s (15.8MB/s-15.8MB/s), io=302MiB (317MB), run=5001-5003msec 00:34:34.247 11:22:02 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:34.247 11:22:02 -- target/dif.sh@43 -- # local sub 00:34:34.247 11:22:02 -- target/dif.sh@45 -- # for sub in "$@" 00:34:34.247 11:22:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:34.247 11:22:02 -- target/dif.sh@36 -- # local sub_id=0 00:34:34.247 11:22:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 11:22:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 11:22:02 
-- target/dif.sh@45 -- # for sub in "$@" 00:34:34.247 11:22:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:34.247 11:22:02 -- target/dif.sh@36 -- # local sub_id=1 00:34:34.247 11:22:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 11:22:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 ************************************ 00:34:34.247 END TEST fio_dif_rand_params 00:34:34.247 ************************************ 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 00:34:34.247 real 0m23.628s 00:34:34.247 user 2m6.447s 00:34:34.247 sys 0m5.397s 00:34:34.247 11:22:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 11:22:02 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:34.247 11:22:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:34.247 11:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 ************************************ 00:34:34.247 START TEST fio_dif_digest 00:34:34.247 ************************************ 00:34:34.247 11:22:02 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:34:34.247 11:22:02 -- target/dif.sh@123 -- # local NULL_DIF 00:34:34.247 11:22:02 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:34.247 11:22:02 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:34.247 11:22:02 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:34.247 11:22:02 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:34.247 11:22:02 -- target/dif.sh@127 -- # numjobs=3 00:34:34.247 11:22:02 -- target/dif.sh@127 -- # iodepth=3 00:34:34.247 11:22:02 -- target/dif.sh@127 -- # runtime=10 00:34:34.247 11:22:02 -- target/dif.sh@128 -- # hdgst=true 00:34:34.247 11:22:02 -- target/dif.sh@128 -- # ddgst=true 00:34:34.247 11:22:02 -- target/dif.sh@130 -- # create_subsystems 0 00:34:34.247 11:22:02 -- target/dif.sh@28 -- # local sub 00:34:34.247 11:22:02 -- target/dif.sh@30 -- # for sub in "$@" 00:34:34.247 11:22:02 -- target/dif.sh@31 -- # create_subsystem 0 00:34:34.247 11:22:02 -- target/dif.sh@18 -- # local sub_id=0 00:34:34.247 11:22:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 bdev_null0 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 11:22:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 11:22:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 
11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 11:22:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:34.247 11:22:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:34.247 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:34:34.247 [2024-04-18 11:22:02.852912] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.247 11:22:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:34.247 11:22:02 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:34.247 11:22:02 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:34.247 11:22:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:34.247 11:22:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.247 11:22:02 -- target/dif.sh@82 -- # gen_fio_conf 00:34:34.247 11:22:02 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.247 11:22:02 -- target/dif.sh@54 -- # local file 00:34:34.247 11:22:02 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:34.247 11:22:02 -- target/dif.sh@56 -- # cat 00:34:34.247 11:22:02 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:34.247 11:22:02 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:34.247 11:22:02 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:34.247 11:22:02 -- common/autotest_common.sh@1327 -- # shift 00:34:34.247 11:22:02 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:34.247 11:22:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.247 11:22:02 -- nvmf/common.sh@521 -- # config=() 00:34:34.247 11:22:02 -- nvmf/common.sh@521 -- # local subsystem config 00:34:34.247 11:22:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:34.247 11:22:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:34.247 { 00:34:34.247 "params": { 00:34:34.247 "name": "Nvme$subsystem", 00:34:34.247 "trtype": "$TEST_TRANSPORT", 00:34:34.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:34.247 "adrfam": "ipv4", 00:34:34.247 "trsvcid": "$NVMF_PORT", 00:34:34.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:34.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:34.247 "hdgst": ${hdgst:-false}, 00:34:34.247 "ddgst": ${ddgst:-false} 00:34:34.247 }, 00:34:34.247 "method": "bdev_nvme_attach_controller" 00:34:34.247 } 00:34:34.247 EOF 00:34:34.247 )") 00:34:34.247 11:22:02 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:34.247 11:22:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:34.247 11:22:02 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:34.247 11:22:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:34.247 11:22:02 -- target/dif.sh@72 -- # (( file <= files )) 00:34:34.247 11:22:02 -- nvmf/common.sh@543 -- # cat 00:34:34.247 11:22:02 -- nvmf/common.sh@545 -- # jq . 
00:34:34.247 11:22:02 -- nvmf/common.sh@546 -- # IFS=, 00:34:34.247 11:22:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:34.247 "params": { 00:34:34.247 "name": "Nvme0", 00:34:34.247 "trtype": "tcp", 00:34:34.247 "traddr": "10.0.0.2", 00:34:34.247 "adrfam": "ipv4", 00:34:34.247 "trsvcid": "4420", 00:34:34.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:34.247 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:34.247 "hdgst": true, 00:34:34.247 "ddgst": true 00:34:34.247 }, 00:34:34.247 "method": "bdev_nvme_attach_controller" 00:34:34.247 }' 00:34:34.506 11:22:02 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:34.506 11:22:02 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:34.506 11:22:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.506 11:22:02 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:34.506 11:22:02 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:34:34.506 11:22:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:34.506 11:22:02 -- common/autotest_common.sh@1331 -- # asan_lib= 00:34:34.506 11:22:02 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:34:34.506 11:22:02 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:34.506 11:22:02 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.506 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:34.506 ... 00:34:34.506 fio-3.35 00:34:34.506 Starting 3 threads 00:34:46.701 00:34:46.701 filename0: (groupid=0, jobs=1): err= 0: pid=110014: Thu Apr 18 11:22:13 2024 00:34:46.701 read: IOPS=243, BW=30.5MiB/s (31.9MB/s)(305MiB/10006msec) 00:34:46.701 slat (nsec): min=7743, max=53497, avg=13958.00, stdev=3384.31 00:34:46.701 clat (usec): min=8797, max=53911, avg=12295.06, stdev=2590.24 00:34:46.701 lat (usec): min=8810, max=53926, avg=12309.02, stdev=2590.32 00:34:46.701 clat percentiles (usec): 00:34:46.701 | 1.00th=[10159], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:34:46.701 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:34:46.701 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:34:46.701 | 99.00th=[14353], 99.50th=[15008], 99.90th=[53216], 99.95th=[53740], 00:34:46.701 | 99.99th=[53740] 00:34:46.701 bw ( KiB/s): min=29184, max=32256, per=38.28%, avg=31188.26, stdev=972.98, samples=19 00:34:46.701 iops : min= 228, max= 252, avg=243.63, stdev= 7.60, samples=19 00:34:46.701 lat (msec) : 10=0.74%, 20=98.89%, 100=0.37% 00:34:46.701 cpu : usr=91.92%, sys=6.54%, ctx=33, majf=0, minf=9 00:34:46.701 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.701 issued rwts: total=2438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:46.701 filename0: (groupid=0, jobs=1): err= 0: pid=110015: Thu Apr 18 11:22:13 2024 00:34:46.701 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(218MiB/10007msec) 00:34:46.701 slat (nsec): min=7125, max=40735, avg=10468.18, stdev=3701.99 00:34:46.701 clat (usec): min=7332, max=19353, avg=17196.22, stdev=1200.70 00:34:46.701 lat (usec): min=7340, max=19367, avg=17206.69, 
stdev=1200.70 00:34:46.701 clat percentiles (usec): 00:34:46.701 | 1.00th=[10552], 5.00th=[15926], 10.00th=[16319], 20.00th=[16712], 00:34:46.701 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17171], 60.00th=[17433], 00:34:46.701 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:34:46.701 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:34:46.701 | 99.99th=[19268] 00:34:46.701 bw ( KiB/s): min=21504, max=23808, per=27.34%, avg=22274.26, stdev=569.30, samples=19 00:34:46.701 iops : min= 168, max= 186, avg=174.00, stdev= 4.47, samples=19 00:34:46.701 lat (msec) : 10=0.17%, 20=99.83% 00:34:46.701 cpu : usr=93.34%, sys=5.51%, ctx=7, majf=0, minf=9 00:34:46.701 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.701 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:46.701 filename0: (groupid=0, jobs=1): err= 0: pid=110016: Thu Apr 18 11:22:13 2024 00:34:46.701 read: IOPS=218, BW=27.3MiB/s (28.7MB/s)(274MiB/10005msec) 00:34:46.701 slat (nsec): min=4862, max=43046, avg=13659.15, stdev=3801.14 00:34:46.701 clat (usec): min=6995, max=17429, avg=13698.47, stdev=1237.44 00:34:46.701 lat (usec): min=7009, max=17444, avg=13712.13, stdev=1237.36 00:34:46.701 clat percentiles (usec): 00:34:46.701 | 1.00th=[ 8586], 5.00th=[11863], 10.00th=[12387], 20.00th=[12780], 00:34:46.701 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:34:46.701 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:34:46.701 | 99.00th=[16188], 99.50th=[16581], 99.90th=[16909], 99.95th=[16909], 00:34:46.701 | 99.99th=[17433] 00:34:46.701 bw ( KiB/s): min=26880, max=29696, per=34.38%, avg=28011.79, stdev=720.63, samples=19 00:34:46.701 iops : min= 210, max= 232, avg=218.84, stdev= 5.63, samples=19 00:34:46.701 lat (msec) : 10=1.83%, 20=98.17% 00:34:46.701 cpu : usr=92.20%, sys=6.39%, ctx=5, majf=0, minf=9 00:34:46.701 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.701 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:46.701 00:34:46.701 Run status group 0 (all jobs): 00:34:46.701 READ: bw=79.6MiB/s (83.4MB/s), 21.8MiB/s-30.5MiB/s (22.8MB/s-31.9MB/s), io=796MiB (835MB), run=10005-10007msec 00:34:46.701 11:22:13 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:46.701 11:22:13 -- target/dif.sh@43 -- # local sub 00:34:46.701 11:22:13 -- target/dif.sh@45 -- # for sub in "$@" 00:34:46.701 11:22:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:46.701 11:22:13 -- target/dif.sh@36 -- # local sub_id=0 00:34:46.701 11:22:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:46.701 11:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:46.701 11:22:13 -- common/autotest_common.sh@10 -- # set +x 00:34:46.701 11:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.701 11:22:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:46.701 11:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 
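The digest run itself is an ordinary fio invocation against that target; a minimal sketch of its shape, assuming the attach-controller JSON printed above has been saved as target.json and the [filename0] job (randread, 128 KiB blocks, iodepth 3, 3 jobs, 10 s runtime) as digest.fio. Both file names are placeholders for illustration, not files created by this test.

  # Sketch only: same plugin and flags the harness uses above; target.json and
  # digest.fio stand in for the /dev/fd/62 and /dev/fd/61 descriptors, and the
  # hdgst/ddgst options are enabled inside target.json, not on the command line.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf target.json digest.fio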
00:34:46.701 11:22:13 -- common/autotest_common.sh@10 -- # set +x 00:34:46.701 ************************************ 00:34:46.701 END TEST fio_dif_digest 00:34:46.701 ************************************ 00:34:46.701 11:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:46.701 00:34:46.701 real 0m10.997s 00:34:46.701 user 0m28.386s 00:34:46.701 sys 0m2.118s 00:34:46.701 11:22:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:46.701 11:22:13 -- common/autotest_common.sh@10 -- # set +x 00:34:46.701 11:22:13 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:46.701 11:22:13 -- target/dif.sh@147 -- # nvmftestfini 00:34:46.701 11:22:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:46.701 11:22:13 -- nvmf/common.sh@117 -- # sync 00:34:46.701 11:22:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:46.701 11:22:13 -- nvmf/common.sh@120 -- # set +e 00:34:46.701 11:22:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:46.701 11:22:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:46.701 rmmod nvme_tcp 00:34:46.701 rmmod nvme_fabrics 00:34:46.701 rmmod nvme_keyring 00:34:46.701 11:22:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:46.701 11:22:13 -- nvmf/common.sh@124 -- # set -e 00:34:46.702 11:22:13 -- nvmf/common.sh@125 -- # return 0 00:34:46.702 11:22:13 -- nvmf/common.sh@478 -- # '[' -n 109242 ']' 00:34:46.702 11:22:13 -- nvmf/common.sh@479 -- # killprocess 109242 00:34:46.702 11:22:13 -- common/autotest_common.sh@936 -- # '[' -z 109242 ']' 00:34:46.702 11:22:13 -- common/autotest_common.sh@940 -- # kill -0 109242 00:34:46.702 11:22:13 -- common/autotest_common.sh@941 -- # uname 00:34:46.702 11:22:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:46.702 11:22:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 109242 00:34:46.702 killing process with pid 109242 00:34:46.702 11:22:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:46.702 11:22:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:46.702 11:22:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 109242' 00:34:46.702 11:22:13 -- common/autotest_common.sh@955 -- # kill 109242 00:34:46.702 11:22:13 -- common/autotest_common.sh@960 -- # wait 109242 00:34:46.702 11:22:14 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:34:46.702 11:22:14 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:46.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:46.702 Waiting for block devices as requested 00:34:46.702 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:46.702 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:46.702 11:22:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:46.702 11:22:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:46.702 11:22:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:46.702 11:22:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:46.702 11:22:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.702 11:22:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:46.702 11:22:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.702 11:22:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:46.702 ************************************ 00:34:46.702 END TEST nvmf_dif 00:34:46.702 ************************************ 00:34:46.702 00:34:46.702 
real 1m0.110s 00:34:46.702 user 3m51.761s 00:34:46.702 sys 0m15.210s 00:34:46.702 11:22:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:46.702 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:34:46.702 11:22:14 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:46.702 11:22:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:46.702 11:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:46.702 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:34:46.702 ************************************ 00:34:46.702 START TEST nvmf_abort_qd_sizes 00:34:46.702 ************************************ 00:34:46.702 11:22:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:46.702 * Looking for test storage... 00:34:46.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:46.702 11:22:14 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:46.702 11:22:14 -- nvmf/common.sh@7 -- # uname -s 00:34:46.702 11:22:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.702 11:22:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.702 11:22:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.702 11:22:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.702 11:22:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:46.702 11:22:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.702 11:22:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.702 11:22:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.702 11:22:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.702 11:22:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.702 11:22:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:34:46.702 11:22:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:34:46.702 11:22:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.702 11:22:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.702 11:22:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:46.702 11:22:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.702 11:22:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:46.702 11:22:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.702 11:22:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.702 11:22:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.702 11:22:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.702 11:22:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.702 11:22:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.702 11:22:15 -- paths/export.sh@5 -- # export PATH 00:34:46.702 11:22:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.702 11:22:15 -- nvmf/common.sh@47 -- # : 0 00:34:46.702 11:22:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:46.702 11:22:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:46.702 11:22:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.702 11:22:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.702 11:22:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.702 11:22:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:46.702 11:22:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:46.702 11:22:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:46.702 11:22:15 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:46.702 11:22:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:34:46.702 11:22:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.702 11:22:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:34:46.702 11:22:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:34:46.702 11:22:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:34:46.702 11:22:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.702 11:22:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:46.702 11:22:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.702 11:22:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:34:46.702 11:22:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:34:46.702 11:22:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:34:46.702 11:22:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:34:46.702 11:22:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:34:46.702 11:22:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:34:46.702 11:22:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:46.702 11:22:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:46.702 11:22:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:46.702 11:22:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:46.702 11:22:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:46.702 11:22:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
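[editor's note] The NVMF_* variables traced above describe the virtual topology that nvmf_veth_init is about to build: one veth pair for the initiator, two for the target namespace, all bridged on nvmf_br. A minimal standalone sketch of that topology, using only the iproute2 and iptables commands visible later in this trace (names and addresses taken from the log, not the suite's exact helper), would be:

    # Sketch of the nvmf_veth_init topology; names/addresses as in the trace.
    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"

    # One veth pair for the initiator side, two for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the test addresses.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peer interfaces.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic on the default port and verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2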
00:34:46.702 11:22:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:46.702 11:22:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:46.702 11:22:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:46.702 11:22:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:46.702 11:22:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:46.702 11:22:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:46.702 11:22:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:46.702 11:22:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:46.702 Cannot find device "nvmf_tgt_br" 00:34:46.702 11:22:15 -- nvmf/common.sh@155 -- # true 00:34:46.702 11:22:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:46.702 Cannot find device "nvmf_tgt_br2" 00:34:46.702 11:22:15 -- nvmf/common.sh@156 -- # true 00:34:46.702 11:22:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:46.702 11:22:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:46.702 Cannot find device "nvmf_tgt_br" 00:34:46.702 11:22:15 -- nvmf/common.sh@158 -- # true 00:34:46.702 11:22:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:46.702 Cannot find device "nvmf_tgt_br2" 00:34:46.702 11:22:15 -- nvmf/common.sh@159 -- # true 00:34:46.702 11:22:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:46.702 11:22:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:46.702 11:22:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:46.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:46.702 11:22:15 -- nvmf/common.sh@162 -- # true 00:34:46.702 11:22:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:46.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:46.702 11:22:15 -- nvmf/common.sh@163 -- # true 00:34:46.702 11:22:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:46.702 11:22:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:46.702 11:22:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:46.702 11:22:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:46.702 11:22:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:46.702 11:22:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:46.702 11:22:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:46.702 11:22:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:46.702 11:22:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:46.702 11:22:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:46.702 11:22:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:46.702 11:22:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:46.703 11:22:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:46.703 11:22:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:46.703 11:22:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:46.703 11:22:15 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:46.703 11:22:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:46.703 11:22:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:46.703 11:22:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:46.703 11:22:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:46.703 11:22:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:46.703 11:22:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:46.703 11:22:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:46.703 11:22:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:46.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:46.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:34:46.703 00:34:46.703 --- 10.0.0.2 ping statistics --- 00:34:46.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.703 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:34:46.703 11:22:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:46.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:46.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:34:46.703 00:34:46.703 --- 10.0.0.3 ping statistics --- 00:34:46.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.703 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:34:46.703 11:22:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:46.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:46.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:34:46.703 00:34:46.703 --- 10.0.0.1 ping statistics --- 00:34:46.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.703 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:34:46.703 11:22:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:46.703 11:22:15 -- nvmf/common.sh@422 -- # return 0 00:34:46.703 11:22:15 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:34:46.703 11:22:15 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:47.639 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:47.639 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:47.639 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:34:47.639 11:22:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.639 11:22:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:34:47.639 11:22:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:34:47.639 11:22:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.639 11:22:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:34:47.639 11:22:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:34:47.639 11:22:16 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:47.639 11:22:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:34:47.639 11:22:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:47.639 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:34:47.639 11:22:16 -- nvmf/common.sh@470 -- # nvmfpid=110605 00:34:47.639 11:22:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:47.639 11:22:16 -- nvmf/common.sh@471 -- # waitforlisten 110605 00:34:47.639 11:22:16 -- 
common/autotest_common.sh@817 -- # '[' -z 110605 ']' 00:34:47.639 11:22:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.639 11:22:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:47.639 11:22:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.639 11:22:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:47.639 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:34:47.898 [2024-04-18 11:22:16.285554] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:34:47.898 [2024-04-18 11:22:16.286312] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.898 [2024-04-18 11:22:16.436693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:48.157 [2024-04-18 11:22:16.541391] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.157 [2024-04-18 11:22:16.541740] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.157 [2024-04-18 11:22:16.541913] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.157 [2024-04-18 11:22:16.542083] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.157 [2024-04-18 11:22:16.542136] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
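[editor's note] At this point nvmfappstart has launched nvmf_tgt inside the target namespace with core mask 0xf and waitforlisten is polling the JSON-RPC UNIX socket until the app answers. A rough standalone equivalent, assuming the SPDK build paths used by this job (the polling loop below is an illustrative sketch, not SPDK's actual waitforlisten implementation), is:

    # Start the target in the namespace and wait for its RPC socket.
    SPDK=/home/vagrant/spdk_repo/spdk
    NS=nvmf_tgt_ns_spdk
    SOCK=/var/tmp/spdk.sock

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!

    # Poll until the target responds on its JSON-RPC UNIX domain socket.
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods > /dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $SOCK"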
00:34:48.157 [2024-04-18 11:22:16.542348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.157 [2024-04-18 11:22:16.545089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.157 [2024-04-18 11:22:16.545275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:48.157 [2024-04-18 11:22:16.545283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.724 11:22:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:48.724 11:22:17 -- common/autotest_common.sh@850 -- # return 0 00:34:48.724 11:22:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:34:48.724 11:22:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:48.724 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.724 11:22:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.724 11:22:17 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:48.724 11:22:17 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:48.724 11:22:17 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:48.724 11:22:17 -- scripts/common.sh@309 -- # local bdf bdfs 00:34:48.724 11:22:17 -- scripts/common.sh@310 -- # local nvmes 00:34:48.724 11:22:17 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:34:48.724 11:22:17 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:34:48.724 11:22:17 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:34:48.724 11:22:17 -- scripts/common.sh@295 -- # local bdf= 00:34:48.724 11:22:17 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:34:48.724 11:22:17 -- scripts/common.sh@230 -- # local class 00:34:48.724 11:22:17 -- scripts/common.sh@231 -- # local subclass 00:34:48.724 11:22:17 -- scripts/common.sh@232 -- # local progif 00:34:48.724 11:22:17 -- scripts/common.sh@233 -- # printf %02x 1 00:34:48.724 11:22:17 -- scripts/common.sh@233 -- # class=01 00:34:48.724 11:22:17 -- scripts/common.sh@234 -- # printf %02x 8 00:34:48.724 11:22:17 -- scripts/common.sh@234 -- # subclass=08 00:34:48.724 11:22:17 -- scripts/common.sh@235 -- # printf %02x 2 00:34:48.724 11:22:17 -- scripts/common.sh@235 -- # progif=02 00:34:48.724 11:22:17 -- scripts/common.sh@237 -- # hash lspci 00:34:48.724 11:22:17 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:34:48.724 11:22:17 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:34:48.724 11:22:17 -- scripts/common.sh@240 -- # grep -i -- -p02 00:34:48.724 11:22:17 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:34:48.724 11:22:17 -- scripts/common.sh@242 -- # tr -d '"' 00:34:48.724 11:22:17 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:34:48.724 11:22:17 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:34:48.724 11:22:17 -- scripts/common.sh@15 -- # local i 00:34:48.724 11:22:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:34:48.724 11:22:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:34:48.724 11:22:17 -- scripts/common.sh@24 -- # return 0 00:34:48.724 11:22:17 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:34:48.724 11:22:17 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:34:48.724 11:22:17 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:34:48.724 11:22:17 -- scripts/common.sh@15 -- # local i 00:34:48.724 11:22:17 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:34:48.724 11:22:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:34:48.724 11:22:17 -- scripts/common.sh@24 -- # return 0 00:34:48.724 11:22:17 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:34:48.724 11:22:17 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:34:48.724 11:22:17 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:34:48.983 11:22:17 -- scripts/common.sh@320 -- # uname -s 00:34:48.983 11:22:17 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:34:48.983 11:22:17 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:34:48.983 11:22:17 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:34:48.983 11:22:17 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:34:48.983 11:22:17 -- scripts/common.sh@320 -- # uname -s 00:34:48.983 11:22:17 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:34:48.983 11:22:17 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:34:48.983 11:22:17 -- scripts/common.sh@325 -- # (( 2 )) 00:34:48.983 11:22:17 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:48.983 11:22:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:48.983 11:22:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:48.983 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.983 ************************************ 00:34:48.983 START TEST spdk_target_abort 00:34:48.983 ************************************ 00:34:48.983 11:22:17 -- common/autotest_common.sh@1111 -- # spdk_target 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:34:48.983 11:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:48.983 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.983 spdk_targetn1 00:34:48.983 11:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:48.983 11:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:48.983 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.983 [2024-04-18 11:22:17.530520] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.983 11:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:48.983 11:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:48.983 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.983 11:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:48.983 11:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:48.983 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.983 11:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:48.983 11:22:17 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:34:48.983 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.983 [2024-04-18 11:22:17.558664] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.983 11:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.983 11:22:17 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:52.275 Initializing NVMe Controllers 00:34:52.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:52.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:52.275 Initialization complete. Launching workers. 
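[editor's note] The rabort helper assembles the transport ID from trtype/adrfam/traddr/trsvcid/subnqn and then sweeps queue depths 4, 24 and 64 against the freshly created subsystem. The three abort runs traced in this test come from a loop roughly equivalent to the sketch below; the abort example binary and its flags are taken directly from the trace:

    # Queue-depth sweep performed by rabort; flags mirror the trace.
    SPDK=/home/vagrant/spdk_repo/spdk
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

    for qd in 4 24 64; do
        # -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/O size.
        # The example issues aborts against outstanding commands at each depth.
        "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
    done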
00:34:52.275 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10919, failed: 0 00:34:52.275 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1092, failed to submit 9827 00:34:52.275 success 758, unsuccess 334, failed 0 00:34:52.275 11:22:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:52.275 11:22:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.561 Initializing NVMe Controllers 00:34:55.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:55.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:55.561 Initialization complete. Launching workers. 00:34:55.561 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5967, failed: 0 00:34:55.561 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1268, failed to submit 4699 00:34:55.561 success 229, unsuccess 1039, failed 0 00:34:55.561 11:22:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:55.561 11:22:24 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:58.849 Initializing NVMe Controllers 00:34:58.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:58.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:58.849 Initialization complete. Launching workers. 00:34:58.849 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31007, failed: 0 00:34:58.849 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2648, failed to submit 28359 00:34:58.849 success 362, unsuccess 2286, failed 0 00:34:58.849 11:22:27 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:58.849 11:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:58.849 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:34:58.849 11:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:58.849 11:22:27 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:58.849 11:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:58.849 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:34:59.413 11:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:59.413 11:22:27 -- target/abort_qd_sizes.sh@61 -- # killprocess 110605 00:34:59.413 11:22:27 -- common/autotest_common.sh@936 -- # '[' -z 110605 ']' 00:34:59.413 11:22:27 -- common/autotest_common.sh@940 -- # kill -0 110605 00:34:59.413 11:22:27 -- common/autotest_common.sh@941 -- # uname 00:34:59.413 11:22:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:59.413 11:22:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110605 00:34:59.413 killing process with pid 110605 00:34:59.413 11:22:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:59.413 11:22:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:59.413 11:22:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110605' 00:34:59.413 11:22:27 -- common/autotest_common.sh@955 -- # kill 
110605 00:34:59.413 11:22:27 -- common/autotest_common.sh@960 -- # wait 110605 00:34:59.671 ************************************ 00:34:59.671 END TEST spdk_target_abort 00:34:59.671 ************************************ 00:34:59.671 00:34:59.671 real 0m10.680s 00:34:59.671 user 0m43.922s 00:34:59.671 sys 0m1.793s 00:34:59.671 11:22:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:59.671 11:22:28 -- common/autotest_common.sh@10 -- # set +x 00:34:59.671 11:22:28 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:59.671 11:22:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:59.671 11:22:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:59.671 11:22:28 -- common/autotest_common.sh@10 -- # set +x 00:34:59.671 ************************************ 00:34:59.671 START TEST kernel_target_abort 00:34:59.671 ************************************ 00:34:59.671 11:22:28 -- common/autotest_common.sh@1111 -- # kernel_target 00:34:59.671 11:22:28 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:59.671 11:22:28 -- nvmf/common.sh@717 -- # local ip 00:34:59.671 11:22:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:34:59.671 11:22:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:34:59.671 11:22:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.671 11:22:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.671 11:22:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:34:59.671 11:22:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.671 11:22:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:34:59.671 11:22:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:34:59.671 11:22:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:34:59.671 11:22:28 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:59.671 11:22:28 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:59.671 11:22:28 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:34:59.671 11:22:28 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:59.671 11:22:28 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:59.671 11:22:28 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:59.671 11:22:28 -- nvmf/common.sh@628 -- # local block nvme 00:34:59.671 11:22:28 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:59.671 11:22:28 -- nvmf/common.sh@631 -- # modprobe nvmet 00:34:59.671 11:22:28 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:59.671 11:22:28 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:00.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:00.236 Waiting for block devices as requested 00:35:00.236 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:00.236 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:00.236 11:22:28 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:35:00.236 11:22:28 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:00.236 11:22:28 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:35:00.236 11:22:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:00.236 11:22:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:00.236 11:22:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:00.236 11:22:28 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:35:00.236 11:22:28 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:00.236 11:22:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:00.236 No valid GPT data, bailing 00:35:00.236 11:22:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:00.494 11:22:28 -- scripts/common.sh@391 -- # pt= 00:35:00.494 11:22:28 -- scripts/common.sh@392 -- # return 1 00:35:00.494 11:22:28 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:35:00.494 11:22:28 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:35:00.494 11:22:28 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:35:00.494 11:22:28 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:35:00.494 11:22:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:35:00.494 11:22:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:35:00.494 11:22:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:00.494 11:22:28 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:35:00.494 11:22:28 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:35:00.494 11:22:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:35:00.494 No valid GPT data, bailing 00:35:00.494 11:22:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:35:00.494 11:22:28 -- scripts/common.sh@391 -- # pt= 00:35:00.494 11:22:28 -- scripts/common.sh@392 -- # return 1 00:35:00.494 11:22:28 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:35:00.494 11:22:28 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:35:00.494 11:22:28 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:35:00.494 11:22:28 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:35:00.494 11:22:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:35:00.494 11:22:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:35:00.494 11:22:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:00.494 11:22:28 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:35:00.494 11:22:28 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:35:00.494 11:22:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:35:00.494 No valid GPT data, bailing 00:35:00.494 11:22:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:35:00.494 11:22:29 -- scripts/common.sh@391 -- # pt= 00:35:00.494 11:22:29 -- scripts/common.sh@392 -- # return 1 00:35:00.494 11:22:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:35:00.494 11:22:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:35:00.494 11:22:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:00.494 11:22:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:35:00.494 11:22:29 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:35:00.494 11:22:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:00.494 11:22:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:00.494 11:22:29 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:35:00.494 11:22:29 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:35:00.494 11:22:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:00.494 No valid GPT data, bailing 00:35:00.494 11:22:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:00.494 11:22:29 -- scripts/common.sh@391 -- # pt= 00:35:00.494 11:22:29 -- scripts/common.sh@392 -- # return 1 00:35:00.494 11:22:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:35:00.494 11:22:29 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:35:00.494 11:22:29 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:00.494 11:22:29 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:00.494 11:22:29 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:00.494 11:22:29 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:00.494 11:22:29 -- nvmf/common.sh@656 -- # echo 1 00:35:00.495 11:22:29 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:35:00.495 11:22:29 -- nvmf/common.sh@658 -- # echo 1 00:35:00.495 11:22:29 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:35:00.495 11:22:29 -- nvmf/common.sh@661 -- # echo tcp 00:35:00.495 11:22:29 -- nvmf/common.sh@662 -- # echo 4420 00:35:00.495 11:22:29 -- nvmf/common.sh@663 -- # echo ipv4 00:35:00.495 11:22:29 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:00.495 11:22:29 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 --hostid=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 -a 10.0.0.1 -t tcp -s 4420 00:35:00.753 00:35:00.753 Discovery Log Number of Records 2, Generation counter 2 00:35:00.753 =====Discovery Log Entry 0====== 00:35:00.753 trtype: tcp 00:35:00.753 adrfam: ipv4 00:35:00.753 subtype: current discovery subsystem 00:35:00.753 treq: not specified, sq flow control disable supported 00:35:00.753 portid: 1 00:35:00.753 trsvcid: 4420 00:35:00.753 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:00.753 traddr: 10.0.0.1 00:35:00.753 eflags: none 00:35:00.753 sectype: none 00:35:00.753 =====Discovery Log Entry 1====== 00:35:00.753 trtype: tcp 00:35:00.753 adrfam: ipv4 00:35:00.753 subtype: nvme subsystem 00:35:00.753 treq: not specified, sq flow control disable supported 00:35:00.753 portid: 1 00:35:00.753 trsvcid: 4420 00:35:00.753 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:00.753 traddr: 10.0.0.1 00:35:00.753 eflags: none 00:35:00.753 sectype: none 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:00.753 
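[editor's note] The configure_kernel_target steps traced above drive the in-kernel nvmet target purely through configfs: the echoed values (model string, backing device /dev/nvme1n1, 10.0.0.1/tcp/4420/ipv4) are visible in the trace, but the attribute files they are redirected into are not, so the standard nvmet configfs names are assumed in this condensed sketch:

    # Condensed configfs sequence for the kernel NVMe/TCP target.
    # Attribute file names are assumptions based on the stock nvmet layout.
    SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    PORT=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir -p "$SUBSYS/namespaces/1" "$PORT"

    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$SUBSYS/attr_model"            # assumed target file
    echo 1            > "$SUBSYS/attr_allow_any_host"                         # assumed target file
    echo /dev/nvme1n1 > "$SUBSYS/namespaces/1/device_path"
    echo 1            > "$SUBSYS/namespaces/1/enable"

    echo 10.0.0.1 > "$PORT/addr_traddr"
    echo tcp      > "$PORT/addr_trtype"
    echo 4420     > "$PORT/addr_trsvcid"
    echo ipv4     > "$PORT/addr_adrfam"

    ln -s "$SUBSYS" "$PORT/subsystems/"

    # Discovery from the initiator side, as shown in the trace output above.
    nvme discover -t tcp -a 10.0.0.1 -s 4420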
11:22:29 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:00.753 11:22:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:04.036 Initializing NVMe Controllers 00:35:04.036 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:04.036 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:04.036 Initialization complete. Launching workers. 00:35:04.036 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32853, failed: 0 00:35:04.036 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32853, failed to submit 0 00:35:04.036 success 0, unsuccess 32853, failed 0 00:35:04.036 11:22:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:04.036 11:22:32 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:07.319 Initializing NVMe Controllers 00:35:07.319 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:07.319 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:07.319 Initialization complete. Launching workers. 
00:35:07.319 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71592, failed: 0 00:35:07.319 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31394, failed to submit 40198 00:35:07.319 success 0, unsuccess 31394, failed 0 00:35:07.319 11:22:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:07.319 11:22:35 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:10.624 Initializing NVMe Controllers 00:35:10.624 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:10.624 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:10.624 Initialization complete. Launching workers. 00:35:10.624 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82606, failed: 0 00:35:10.624 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20682, failed to submit 61924 00:35:10.624 success 0, unsuccess 20682, failed 0 00:35:10.624 11:22:38 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:10.624 11:22:38 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:10.624 11:22:38 -- nvmf/common.sh@675 -- # echo 0 00:35:10.624 11:22:38 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:10.624 11:22:38 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:10.624 11:22:38 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:10.624 11:22:38 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:10.624 11:22:38 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:35:10.624 11:22:38 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:35:10.624 11:22:38 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:10.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:11.822 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:11.822 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:12.081 00:35:12.081 real 0m12.253s 00:35:12.081 user 0m6.280s 00:35:12.081 sys 0m3.351s 00:35:12.081 11:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:12.081 11:22:40 -- common/autotest_common.sh@10 -- # set +x 00:35:12.081 ************************************ 00:35:12.081 END TEST kernel_target_abort 00:35:12.081 ************************************ 00:35:12.081 11:22:40 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:12.081 11:22:40 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:12.081 11:22:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:35:12.081 11:22:40 -- nvmf/common.sh@117 -- # sync 00:35:12.081 11:22:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:12.081 11:22:40 -- nvmf/common.sh@120 -- # set +e 00:35:12.081 11:22:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:12.081 11:22:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:12.081 rmmod nvme_tcp 00:35:12.081 rmmod nvme_fabrics 00:35:12.081 rmmod nvme_keyring 00:35:12.081 11:22:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:12.081 11:22:40 -- nvmf/common.sh@124 -- # set -e 00:35:12.081 
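[editor's note] The clean_kernel_target and nvmftestfini steps traced here undo the earlier setup: disable and remove the configfs namespace, port and subsystem, unload the nvmet modules, then unload the initiator-side fabrics modules. A condensed sketch of that teardown, with the same paths as above (the enable-file path is an assumption, since the redirect target is not captured in the trace), is:

    # Teardown mirroring clean_kernel_target + nvmftestfini.
    SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    PORT=/sys/kernel/config/nvmet/ports/1

    echo 0 > "$SUBSYS/namespaces/1/enable"   # assumed target file for the 'echo 0' in the trace
    rm -f "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$SUBSYS/namespaces/1"
    rmdir "$PORT"
    rmdir "$SUBSYS"
    modprobe -r nvmet_tcp nvmet

    # Unload the initiator-side modules and flush the test address,
    # as nvmftestfini does at the end of the run.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush nvmf_init_if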
11:22:40 -- nvmf/common.sh@125 -- # return 0 00:35:12.081 11:22:40 -- nvmf/common.sh@478 -- # '[' -n 110605 ']' 00:35:12.081 11:22:40 -- nvmf/common.sh@479 -- # killprocess 110605 00:35:12.081 11:22:40 -- common/autotest_common.sh@936 -- # '[' -z 110605 ']' 00:35:12.081 11:22:40 -- common/autotest_common.sh@940 -- # kill -0 110605 00:35:12.081 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (110605) - No such process 00:35:12.081 Process with pid 110605 is not found 00:35:12.081 11:22:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 110605 is not found' 00:35:12.081 11:22:40 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:35:12.081 11:22:40 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:12.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:12.340 Waiting for block devices as requested 00:35:12.599 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:12.599 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:12.599 11:22:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:35:12.599 11:22:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:35:12.599 11:22:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:12.599 11:22:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:12.599 11:22:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.599 11:22:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:12.599 11:22:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.599 11:22:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:12.599 00:35:12.599 real 0m26.326s 00:35:12.599 user 0m51.424s 00:35:12.599 sys 0m6.520s 00:35:12.599 11:22:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:12.599 ************************************ 00:35:12.599 11:22:41 -- common/autotest_common.sh@10 -- # set +x 00:35:12.599 END TEST nvmf_abort_qd_sizes 00:35:12.599 ************************************ 00:35:12.858 11:22:41 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:35:12.858 11:22:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:12.858 11:22:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:12.858 11:22:41 -- common/autotest_common.sh@10 -- # set +x 00:35:12.858 ************************************ 00:35:12.858 START TEST keyring_file 00:35:12.858 ************************************ 00:35:12.858 11:22:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:35:12.858 * Looking for test storage... 
00:35:12.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:35:12.858 11:22:41 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:35:12.858 11:22:41 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:12.858 11:22:41 -- nvmf/common.sh@7 -- # uname -s 00:35:12.858 11:22:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:12.858 11:22:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:12.858 11:22:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:12.858 11:22:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:12.858 11:22:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:12.858 11:22:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:12.858 11:22:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:12.858 11:22:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:12.858 11:22:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:12.858 11:22:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.858 11:22:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:35:12.858 11:22:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7cecbd6-b22b-4df0-b78c-1e81c1921ea4 00:35:12.858 11:22:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.858 11:22:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.858 11:22:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:12.858 11:22:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.858 11:22:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:12.858 11:22:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.858 11:22:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.858 11:22:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.858 11:22:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.858 11:22:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.858 11:22:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.858 11:22:41 -- paths/export.sh@5 -- # export PATH 00:35:12.858 11:22:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.858 11:22:41 -- nvmf/common.sh@47 -- # : 0 00:35:12.858 11:22:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:12.858 11:22:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:12.858 11:22:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.858 11:22:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.858 11:22:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.858 11:22:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:12.858 11:22:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:12.858 11:22:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:12.858 11:22:41 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:12.858 11:22:41 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:12.858 11:22:41 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:12.858 11:22:41 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:12.858 11:22:41 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:12.858 11:22:41 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:12.858 11:22:41 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:12.858 11:22:41 -- keyring/common.sh@15 -- # local name key digest path 00:35:12.858 11:22:41 -- keyring/common.sh@17 -- # name=key0 00:35:12.858 11:22:41 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:12.858 11:22:41 -- keyring/common.sh@17 -- # digest=0 00:35:12.858 11:22:41 -- keyring/common.sh@18 -- # mktemp 00:35:12.858 11:22:41 -- keyring/common.sh@18 -- # path=/tmp/tmp.HlchjlzcPY 00:35:12.858 11:22:41 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:12.858 11:22:41 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:12.858 11:22:41 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:12.858 11:22:41 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:12.858 11:22:41 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:35:12.858 11:22:41 -- nvmf/common.sh@693 -- # digest=0 00:35:12.858 11:22:41 -- nvmf/common.sh@694 -- # python - 00:35:13.117 11:22:41 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HlchjlzcPY 00:35:13.117 11:22:41 -- keyring/common.sh@23 -- # echo /tmp/tmp.HlchjlzcPY 00:35:13.117 11:22:41 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HlchjlzcPY 00:35:13.117 11:22:41 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:13.117 11:22:41 -- keyring/common.sh@15 -- # local name key digest path 00:35:13.117 11:22:41 -- keyring/common.sh@17 -- # name=key1 00:35:13.117 11:22:41 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:13.117 11:22:41 -- keyring/common.sh@17 -- # digest=0 00:35:13.117 11:22:41 -- keyring/common.sh@18 -- # mktemp 00:35:13.117 11:22:41 -- keyring/common.sh@18 -- # path=/tmp/tmp.CgNCSpyRoI 00:35:13.117 11:22:41 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:13.117 11:22:41 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:35:13.117 11:22:41 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:13.117 11:22:41 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:13.117 11:22:41 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:35:13.117 11:22:41 -- nvmf/common.sh@693 -- # digest=0 00:35:13.117 11:22:41 -- nvmf/common.sh@694 -- # python - 00:35:13.117 11:22:41 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CgNCSpyRoI 00:35:13.117 11:22:41 -- keyring/common.sh@23 -- # echo /tmp/tmp.CgNCSpyRoI 00:35:13.117 11:22:41 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.CgNCSpyRoI 00:35:13.117 11:22:41 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:13.117 11:22:41 -- keyring/file.sh@30 -- # tgtpid=111500 00:35:13.117 11:22:41 -- keyring/file.sh@32 -- # waitforlisten 111500 00:35:13.117 11:22:41 -- common/autotest_common.sh@817 -- # '[' -z 111500 ']' 00:35:13.117 11:22:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.117 11:22:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:13.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.117 11:22:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.117 11:22:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:13.117 11:22:41 -- common/autotest_common.sh@10 -- # set +x 00:35:13.117 [2024-04-18 11:22:41.655135] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:35:13.117 [2024-04-18 11:22:41.655247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111500 ] 00:35:13.447 [2024-04-18 11:22:41.796760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.447 [2024-04-18 11:22:41.889214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.017 11:22:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:14.017 11:22:42 -- common/autotest_common.sh@850 -- # return 0 00:35:14.017 11:22:42 -- keyring/file.sh@33 -- # rpc_cmd 00:35:14.017 11:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.017 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:35:14.017 [2024-04-18 11:22:42.610237] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.017 null0 00:35:14.017 [2024-04-18 11:22:42.642216] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:14.017 [2024-04-18 11:22:42.642441] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:14.017 [2024-04-18 11:22:42.650238] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:14.017 11:22:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.017 11:22:42 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:14.017 11:22:42 -- common/autotest_common.sh@638 -- # local es=0 00:35:14.017 11:22:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:14.017 11:22:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:35:14.276 11:22:42 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:14.276 11:22:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:35:14.276 11:22:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:14.276 11:22:42 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:14.276 11:22:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.276 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:35:14.276 [2024-04-18 11:22:42.662221] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.2024/04/18 11:22:42 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:35:14.276 request: 00:35:14.276 { 00:35:14.276 "method": "nvmf_subsystem_add_listener", 00:35:14.276 "params": { 00:35:14.276 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.276 "secure_channel": false, 00:35:14.276 "listen_address": { 00:35:14.276 "trtype": "tcp", 00:35:14.276 "traddr": "127.0.0.1", 00:35:14.276 "trsvcid": "4420" 00:35:14.276 } 00:35:14.276 } 00:35:14.276 } 00:35:14.276 Got JSON-RPC error response 00:35:14.276 GoRPCClient: error on JSON-RPC call 00:35:14.276 11:22:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:35:14.276 11:22:42 -- common/autotest_common.sh@641 -- # es=1 00:35:14.276 11:22:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:14.276 11:22:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:14.276 11:22:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:14.276 11:22:42 -- keyring/file.sh@46 -- # bperfpid=111535 00:35:14.276 11:22:42 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:14.276 11:22:42 -- keyring/file.sh@48 -- # waitforlisten 111535 /var/tmp/bperf.sock 00:35:14.276 11:22:42 -- common/autotest_common.sh@817 -- # '[' -z 111535 ']' 00:35:14.276 11:22:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.276 11:22:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:14.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.276 11:22:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.277 11:22:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:14.277 11:22:42 -- common/autotest_common.sh@10 -- # set +x 00:35:14.277 [2024-04-18 11:22:42.724778] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
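The trace above builds a second interchange-format PSK (an NVMeTLSkey-1-prefixed string written to a 0600-mode temp file), starts spdk_tgt, and then verifies that re-adding the existing 127.0.0.1:4420 listener with a different secure_channel setting is rejected with -32602 before bdevperf is launched. A minimal reproduction of that negative check, assuming a running target on the default /var/tmp/spdk.sock socket with the TLS listener already configured as in this test:

# Hypothetical reproduction of the listener-conflict check traced above.
# Assumes spdk_tgt is up and nqn.2016-06.io.spdk:cnode0 already listens on
# 127.0.0.1:4420 with a secure channel (as configured earlier in this test).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Re-adding the same address without a secure channel should fail with
# "Invalid parameters" (-32602), matching the GoRPCClient error logged above.
if "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4; then
    echo "listener conflict was not detected" >&2
    exit 1
fi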
00:35:14.277 [2024-04-18 11:22:42.724878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111535 ] 00:35:14.277 [2024-04-18 11:22:42.866390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.535 [2024-04-18 11:22:42.958272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.102 11:22:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:15.102 11:22:43 -- common/autotest_common.sh@850 -- # return 0 00:35:15.102 11:22:43 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:15.102 11:22:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:15.360 11:22:43 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CgNCSpyRoI 00:35:15.360 11:22:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CgNCSpyRoI 00:35:15.619 11:22:44 -- keyring/file.sh@51 -- # get_key key0 00:35:15.619 11:22:44 -- keyring/file.sh@51 -- # jq -r .path 00:35:15.619 11:22:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.619 11:22:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.619 11:22:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:15.877 11:22:44 -- keyring/file.sh@51 -- # [[ /tmp/tmp.HlchjlzcPY == \/\t\m\p\/\t\m\p\.\H\l\c\h\j\l\z\c\P\Y ]] 00:35:15.877 11:22:44 -- keyring/file.sh@52 -- # get_key key1 00:35:15.877 11:22:44 -- keyring/file.sh@52 -- # jq -r .path 00:35:15.877 11:22:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.877 11:22:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.877 11:22:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:16.135 11:22:44 -- keyring/file.sh@52 -- # [[ /tmp/tmp.CgNCSpyRoI == \/\t\m\p\/\t\m\p\.\C\g\N\C\S\p\y\R\o\I ]] 00:35:16.135 11:22:44 -- keyring/file.sh@53 -- # get_refcnt key0 00:35:16.135 11:22:44 -- keyring/common.sh@12 -- # get_key key0 00:35:16.135 11:22:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.135 11:22:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.135 11:22:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.135 11:22:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.393 11:22:44 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:16.393 11:22:44 -- keyring/file.sh@54 -- # get_refcnt key1 00:35:16.393 11:22:44 -- keyring/common.sh@12 -- # get_key key1 00:35:16.393 11:22:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.393 11:22:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.393 11:22:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.393 11:22:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:16.651 11:22:45 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:16.651 11:22:45 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 
--psk key0 00:35:16.651 11:22:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:16.909 [2024-04-18 11:22:45.319014] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:16.909 nvme0n1 00:35:16.909 11:22:45 -- keyring/file.sh@59 -- # get_refcnt key0 00:35:16.909 11:22:45 -- keyring/common.sh@12 -- # get_key key0 00:35:16.909 11:22:45 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.909 11:22:45 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.909 11:22:45 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.909 11:22:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.167 11:22:45 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:17.167 11:22:45 -- keyring/file.sh@60 -- # get_refcnt key1 00:35:17.167 11:22:45 -- keyring/common.sh@12 -- # get_key key1 00:35:17.167 11:22:45 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:17.167 11:22:45 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:17.167 11:22:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.167 11:22:45 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:17.425 11:22:45 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:17.425 11:22:46 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.683 Running I/O for 1 seconds... 00:35:18.645 00:35:18.645 Latency(us) 00:35:18.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.645 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:18.645 nvme0n1 : 1.01 11555.46 45.14 0.00 0.00 11042.15 3798.11 18469.24 00:35:18.645 =================================================================================================================== 00:35:18.645 Total : 11555.46 45.14 0.00 0.00 11042.15 3798.11 18469.24 00:35:18.645 0 00:35:18.645 11:22:47 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:18.645 11:22:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:18.903 11:22:47 -- keyring/file.sh@65 -- # get_refcnt key0 00:35:18.903 11:22:47 -- keyring/common.sh@12 -- # get_key key0 00:35:18.903 11:22:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.903 11:22:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.903 11:22:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.903 11:22:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.161 11:22:47 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:35:19.161 11:22:47 -- keyring/file.sh@66 -- # get_refcnt key1 00:35:19.161 11:22:47 -- keyring/common.sh@12 -- # get_key key1 00:35:19.161 11:22:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.161 11:22:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.161 11:22:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.161 11:22:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
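At this point both file-backed keys are registered in the bdevperf keyring: the controller is attached with --psk key0, the refcount check shows the attached controller pinning its key (key0 reports 2 while key1 stays at 1), one second of random I/O runs over the TLS connection, and after bdev_nvme_detach_controller both refcounts drop back to 1. A condensed sketch of that sequence, assuming bdevperf listens on /var/tmp/bperf.sock and key0 was added with keyring_file_add_key:

# Hypothetical condensation of the attach/refcount steps traced above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Attach an NVMe/TCP controller that authenticates with the registered PSK.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# While the controller is connected the bdev layer holds a reference,
# so the key's refcnt is reported as 2 instead of 1.
$RPC keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'

# Dropping the controller releases that reference again.
$RPC bdev_nvme_detach_controller nvme0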
00:35:19.420 11:22:47 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:19.420 11:22:47 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.420 11:22:47 -- common/autotest_common.sh@638 -- # local es=0 00:35:19.420 11:22:47 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.420 11:22:47 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:19.420 11:22:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:19.420 11:22:47 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:19.420 11:22:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:19.420 11:22:47 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.420 11:22:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.679 [2024-04-18 11:22:48.207521] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:19.679 [2024-04-18 11:22:48.208057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c825b0 (107): Transport endpoint is not connected 00:35:19.679 [2024-04-18 11:22:48.209031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c825b0 (9): Bad file descriptor 00:35:19.679 [2024-04-18 11:22:48.210028] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:19.679 [2024-04-18 11:22:48.210055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:19.679 [2024-04-18 11:22:48.210066] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
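The failed attach above is the mismatched-key case: key1 exists in the initiator keyring, but it is not the PSK the target associates with host0, so the TLS connection is torn down (the errno 107 / errno 9 messages on the closed socket) and controller initialization fails before the JSON-RPC error below is returned. A hedged sketch of the same negative check, reusing the $RPC alias from the previous sketch:

# Hypothetical negative check mirroring the trace above: key1 is registered
# locally but does not match the target-side PSK for host0, so the attach
# RPC is expected to fail.
if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "attach with mismatched PSK unexpectedly succeeded" >&2
    exit 1
fi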
00:35:19.679 2024/04/18 11:22:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:35:19.679 request: 00:35:19.679 { 00:35:19.679 "method": "bdev_nvme_attach_controller", 00:35:19.679 "params": { 00:35:19.679 "name": "nvme0", 00:35:19.679 "trtype": "tcp", 00:35:19.679 "traddr": "127.0.0.1", 00:35:19.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.679 "adrfam": "ipv4", 00:35:19.679 "trsvcid": "4420", 00:35:19.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.679 "psk": "key1" 00:35:19.679 } 00:35:19.679 } 00:35:19.679 Got JSON-RPC error response 00:35:19.679 GoRPCClient: error on JSON-RPC call 00:35:19.679 11:22:48 -- common/autotest_common.sh@641 -- # es=1 00:35:19.679 11:22:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:19.679 11:22:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:19.679 11:22:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:19.679 11:22:48 -- keyring/file.sh@71 -- # get_refcnt key0 00:35:19.679 11:22:48 -- keyring/common.sh@12 -- # get_key key0 00:35:19.679 11:22:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.679 11:22:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.679 11:22:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.679 11:22:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.937 11:22:48 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:19.937 11:22:48 -- keyring/file.sh@72 -- # get_refcnt key1 00:35:19.937 11:22:48 -- keyring/common.sh@12 -- # get_key key1 00:35:19.937 11:22:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.937 11:22:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:19.937 11:22:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.937 11:22:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.514 11:22:48 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:20.514 11:22:48 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:20.514 11:22:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:20.514 11:22:49 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:20.514 11:22:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:20.772 11:22:49 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:20.772 11:22:49 -- keyring/file.sh@77 -- # jq length 00:35:20.772 11:22:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.030 11:22:49 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:21.030 11:22:49 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.HlchjlzcPY 00:35:21.030 11:22:49 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:21.030 11:22:49 -- common/autotest_common.sh@638 -- # local es=0 00:35:21.030 11:22:49 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:21.030 11:22:49 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:21.030 
11:22:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:21.030 11:22:49 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:21.030 11:22:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:21.030 11:22:49 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:21.030 11:22:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:21.288 [2024-04-18 11:22:49.875473] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HlchjlzcPY': 0100660 00:35:21.288 [2024-04-18 11:22:49.875547] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:21.288 2024/04/18 11:22:49 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.HlchjlzcPY], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:35:21.288 request: 00:35:21.288 { 00:35:21.288 "method": "keyring_file_add_key", 00:35:21.288 "params": { 00:35:21.288 "name": "key0", 00:35:21.288 "path": "/tmp/tmp.HlchjlzcPY" 00:35:21.288 } 00:35:21.288 } 00:35:21.288 Got JSON-RPC error response 00:35:21.288 GoRPCClient: error on JSON-RPC call 00:35:21.288 11:22:49 -- common/autotest_common.sh@641 -- # es=1 00:35:21.288 11:22:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:21.288 11:22:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:21.288 11:22:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:21.288 11:22:49 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.HlchjlzcPY 00:35:21.288 11:22:49 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:21.288 11:22:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HlchjlzcPY 00:35:21.546 11:22:50 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.HlchjlzcPY 00:35:21.546 11:22:50 -- keyring/file.sh@88 -- # get_refcnt key0 00:35:21.546 11:22:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.546 11:22:50 -- keyring/common.sh@12 -- # get_key key0 00:35:21.546 11:22:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.546 11:22:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.546 11:22:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.805 11:22:50 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:21.805 11:22:50 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.805 11:22:50 -- common/autotest_common.sh@638 -- # local es=0 00:35:21.805 11:22:50 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.805 11:22:50 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:21.805 11:22:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:21.805 11:22:50 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:21.805 11:22:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:21.805 11:22:50 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.805 11:22:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:22.063 [2024-04-18 11:22:50.623723] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HlchjlzcPY': No such file or directory 00:35:22.063 [2024-04-18 11:22:50.623783] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:22.063 [2024-04-18 11:22:50.623824] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:22.063 [2024-04-18 11:22:50.623832] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:22.063 [2024-04-18 11:22:50.623841] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:22.063 2024/04/18 11:22:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:35:22.063 request: 00:35:22.063 { 00:35:22.063 "method": "bdev_nvme_attach_controller", 00:35:22.063 "params": { 00:35:22.063 "name": "nvme0", 00:35:22.063 "trtype": "tcp", 00:35:22.063 "traddr": "127.0.0.1", 00:35:22.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.063 "adrfam": "ipv4", 00:35:22.063 "trsvcid": "4420", 00:35:22.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.063 "psk": "key0" 00:35:22.063 } 00:35:22.063 } 00:35:22.063 Got JSON-RPC error response 00:35:22.063 GoRPCClient: error on JSON-RPC call 00:35:22.063 11:22:50 -- common/autotest_common.sh@641 -- # es=1 00:35:22.063 11:22:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:22.063 11:22:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:22.063 11:22:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:22.063 11:22:50 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:22.063 11:22:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:22.321 11:22:50 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:22.321 11:22:50 -- keyring/common.sh@15 -- # local name key digest path 00:35:22.321 11:22:50 -- keyring/common.sh@17 -- # name=key0 00:35:22.321 11:22:50 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:22.321 11:22:50 -- keyring/common.sh@17 -- # digest=0 00:35:22.321 11:22:50 -- keyring/common.sh@18 -- # mktemp 00:35:22.321 11:22:50 -- keyring/common.sh@18 -- # path=/tmp/tmp.aBrmLwVQhg 00:35:22.321 11:22:50 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:22.321 11:22:50 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:22.321 11:22:50 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:22.321 11:22:50 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:22.321 11:22:50 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:35:22.321 11:22:50 -- nvmf/common.sh@693 -- # digest=0 00:35:22.321 11:22:50 -- nvmf/common.sh@694 -- # python - 00:35:22.321 
11:22:50 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aBrmLwVQhg 00:35:22.321 11:22:50 -- keyring/common.sh@23 -- # echo /tmp/tmp.aBrmLwVQhg 00:35:22.321 11:22:50 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.aBrmLwVQhg 00:35:22.321 11:22:50 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aBrmLwVQhg 00:35:22.321 11:22:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aBrmLwVQhg 00:35:22.579 11:22:51 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:22.579 11:22:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:23.145 nvme0n1 00:35:23.145 11:22:51 -- keyring/file.sh@99 -- # get_refcnt key0 00:35:23.145 11:22:51 -- keyring/common.sh@12 -- # get_key key0 00:35:23.145 11:22:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:23.145 11:22:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:23.145 11:22:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.145 11:22:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:23.403 11:22:51 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:23.403 11:22:51 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:23.403 11:22:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:23.403 11:22:52 -- keyring/file.sh@101 -- # get_key key0 00:35:23.403 11:22:52 -- keyring/file.sh@101 -- # jq -r .removed 00:35:23.403 11:22:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:23.403 11:22:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:23.403 11:22:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.660 11:22:52 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:23.660 11:22:52 -- keyring/file.sh@102 -- # get_refcnt key0 00:35:23.660 11:22:52 -- keyring/common.sh@12 -- # get_key key0 00:35:23.660 11:22:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:23.660 11:22:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:23.660 11:22:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.660 11:22:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:24.225 11:22:52 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:24.225 11:22:52 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:24.225 11:22:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:24.225 11:22:52 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:24.225 11:22:52 -- keyring/file.sh@104 -- # jq length 00:35:24.225 11:22:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:24.483 11:22:53 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:24.483 11:22:53 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aBrmLwVQhg 00:35:24.483 11:22:53 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aBrmLwVQhg 00:35:24.742 11:22:53 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CgNCSpyRoI 00:35:24.742 11:22:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CgNCSpyRoI 00:35:24.999 11:22:53 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:24.999 11:22:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:25.566 nvme0n1 00:35:25.566 11:22:54 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:25.566 11:22:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:25.824 11:22:54 -- keyring/file.sh@112 -- # config='{ 00:35:25.824 "subsystems": [ 00:35:25.824 { 00:35:25.824 "subsystem": "keyring", 00:35:25.824 "config": [ 00:35:25.824 { 00:35:25.824 "method": "keyring_file_add_key", 00:35:25.824 "params": { 00:35:25.824 "name": "key0", 00:35:25.824 "path": "/tmp/tmp.aBrmLwVQhg" 00:35:25.824 } 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "method": "keyring_file_add_key", 00:35:25.824 "params": { 00:35:25.824 "name": "key1", 00:35:25.824 "path": "/tmp/tmp.CgNCSpyRoI" 00:35:25.824 } 00:35:25.824 } 00:35:25.824 ] 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "subsystem": "iobuf", 00:35:25.824 "config": [ 00:35:25.824 { 00:35:25.824 "method": "iobuf_set_options", 00:35:25.824 "params": { 00:35:25.824 "large_bufsize": 135168, 00:35:25.824 "large_pool_count": 1024, 00:35:25.824 "small_bufsize": 8192, 00:35:25.824 "small_pool_count": 8192 00:35:25.824 } 00:35:25.824 } 00:35:25.824 ] 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "subsystem": "sock", 00:35:25.824 "config": [ 00:35:25.824 { 00:35:25.824 "method": "sock_impl_set_options", 00:35:25.824 "params": { 00:35:25.824 "enable_ktls": false, 00:35:25.824 "enable_placement_id": 0, 00:35:25.824 "enable_quickack": false, 00:35:25.824 "enable_recv_pipe": true, 00:35:25.824 "enable_zerocopy_send_client": false, 00:35:25.824 "enable_zerocopy_send_server": true, 00:35:25.824 "impl_name": "posix", 00:35:25.824 "recv_buf_size": 2097152, 00:35:25.824 "send_buf_size": 2097152, 00:35:25.824 "tls_version": 0, 00:35:25.824 "zerocopy_threshold": 0 00:35:25.824 } 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "method": "sock_impl_set_options", 00:35:25.824 "params": { 00:35:25.824 "enable_ktls": false, 00:35:25.824 "enable_placement_id": 0, 00:35:25.824 "enable_quickack": false, 00:35:25.824 "enable_recv_pipe": true, 00:35:25.824 "enable_zerocopy_send_client": false, 00:35:25.824 "enable_zerocopy_send_server": true, 00:35:25.824 "impl_name": "ssl", 00:35:25.824 "recv_buf_size": 4096, 00:35:25.824 "send_buf_size": 4096, 00:35:25.824 "tls_version": 0, 00:35:25.824 "zerocopy_threshold": 0 00:35:25.824 } 00:35:25.824 } 00:35:25.824 ] 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "subsystem": "vmd", 00:35:25.824 "config": [] 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "subsystem": "accel", 00:35:25.824 "config": [ 00:35:25.824 { 00:35:25.824 "method": "accel_set_options", 00:35:25.824 "params": { 00:35:25.824 "buf_count": 2048, 00:35:25.824 "large_cache_size": 16, 00:35:25.824 
"sequence_count": 2048, 00:35:25.824 "small_cache_size": 128, 00:35:25.824 "task_count": 2048 00:35:25.824 } 00:35:25.824 } 00:35:25.824 ] 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "subsystem": "bdev", 00:35:25.824 "config": [ 00:35:25.824 { 00:35:25.824 "method": "bdev_set_options", 00:35:25.824 "params": { 00:35:25.824 "bdev_auto_examine": true, 00:35:25.824 "bdev_io_cache_size": 256, 00:35:25.824 "bdev_io_pool_size": 65535, 00:35:25.824 "iobuf_large_cache_size": 16, 00:35:25.824 "iobuf_small_cache_size": 128 00:35:25.824 } 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "method": "bdev_raid_set_options", 00:35:25.824 "params": { 00:35:25.824 "process_window_size_kb": 1024 00:35:25.824 } 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "method": "bdev_iscsi_set_options", 00:35:25.824 "params": { 00:35:25.824 "timeout_sec": 30 00:35:25.824 } 00:35:25.824 }, 00:35:25.824 { 00:35:25.824 "method": "bdev_nvme_set_options", 00:35:25.824 "params": { 00:35:25.825 "action_on_timeout": "none", 00:35:25.825 "allow_accel_sequence": false, 00:35:25.825 "arbitration_burst": 0, 00:35:25.825 "bdev_retry_count": 3, 00:35:25.825 "ctrlr_loss_timeout_sec": 0, 00:35:25.825 "delay_cmd_submit": true, 00:35:25.825 "dhchap_dhgroups": [ 00:35:25.825 "null", 00:35:25.825 "ffdhe2048", 00:35:25.825 "ffdhe3072", 00:35:25.825 "ffdhe4096", 00:35:25.825 "ffdhe6144", 00:35:25.825 "ffdhe8192" 00:35:25.825 ], 00:35:25.825 "dhchap_digests": [ 00:35:25.825 "sha256", 00:35:25.825 "sha384", 00:35:25.825 "sha512" 00:35:25.825 ], 00:35:25.825 "disable_auto_failback": false, 00:35:25.825 "fast_io_fail_timeout_sec": 0, 00:35:25.825 "generate_uuids": false, 00:35:25.825 "high_priority_weight": 0, 00:35:25.825 "io_path_stat": false, 00:35:25.825 "io_queue_requests": 512, 00:35:25.825 "keep_alive_timeout_ms": 10000, 00:35:25.825 "low_priority_weight": 0, 00:35:25.825 "medium_priority_weight": 0, 00:35:25.825 "nvme_adminq_poll_period_us": 10000, 00:35:25.825 "nvme_error_stat": false, 00:35:25.825 "nvme_ioq_poll_period_us": 0, 00:35:25.825 "rdma_cm_event_timeout_ms": 0, 00:35:25.825 "rdma_max_cq_size": 0, 00:35:25.825 "rdma_srq_size": 0, 00:35:25.825 "reconnect_delay_sec": 0, 00:35:25.825 "timeout_admin_us": 0, 00:35:25.825 "timeout_us": 0, 00:35:25.825 "transport_ack_timeout": 0, 00:35:25.825 "transport_retry_count": 4, 00:35:25.825 "transport_tos": 0 00:35:25.825 } 00:35:25.825 }, 00:35:25.825 { 00:35:25.825 "method": "bdev_nvme_attach_controller", 00:35:25.825 "params": { 00:35:25.825 "adrfam": "IPv4", 00:35:25.825 "ctrlr_loss_timeout_sec": 0, 00:35:25.825 "ddgst": false, 00:35:25.825 "fast_io_fail_timeout_sec": 0, 00:35:25.825 "hdgst": false, 00:35:25.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.825 "name": "nvme0", 00:35:25.825 "prchk_guard": false, 00:35:25.825 "prchk_reftag": false, 00:35:25.825 "psk": "key0", 00:35:25.825 "reconnect_delay_sec": 0, 00:35:25.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.825 "traddr": "127.0.0.1", 00:35:25.825 "trsvcid": "4420", 00:35:25.825 "trtype": "TCP" 00:35:25.825 } 00:35:25.825 }, 00:35:25.825 { 00:35:25.825 "method": "bdev_nvme_set_hotplug", 00:35:25.825 "params": { 00:35:25.825 "enable": false, 00:35:25.825 "period_us": 100000 00:35:25.825 } 00:35:25.825 }, 00:35:25.825 { 00:35:25.825 "method": "bdev_wait_for_examine" 00:35:25.825 } 00:35:25.825 ] 00:35:25.825 }, 00:35:25.825 { 00:35:25.825 "subsystem": "nbd", 00:35:25.825 "config": [] 00:35:25.825 } 00:35:25.825 ] 00:35:25.825 }' 00:35:25.825 11:22:54 -- keyring/file.sh@114 -- # killprocess 111535 00:35:25.825 11:22:54 -- 
common/autotest_common.sh@936 -- # '[' -z 111535 ']' 00:35:25.825 11:22:54 -- common/autotest_common.sh@940 -- # kill -0 111535 00:35:25.825 11:22:54 -- common/autotest_common.sh@941 -- # uname 00:35:25.825 11:22:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:25.825 11:22:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111535 00:35:25.825 killing process with pid 111535 00:35:25.825 Received shutdown signal, test time was about 1.000000 seconds 00:35:25.825 00:35:25.825 Latency(us) 00:35:25.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.825 =================================================================================================================== 00:35:25.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.825 11:22:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:35:25.825 11:22:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:35:25.825 11:22:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111535' 00:35:25.825 11:22:54 -- common/autotest_common.sh@955 -- # kill 111535 00:35:25.825 11:22:54 -- common/autotest_common.sh@960 -- # wait 111535 00:35:26.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:26.083 11:22:54 -- keyring/file.sh@117 -- # bperfpid=112001 00:35:26.083 11:22:54 -- keyring/file.sh@119 -- # waitforlisten 112001 /var/tmp/bperf.sock 00:35:26.083 11:22:54 -- common/autotest_common.sh@817 -- # '[' -z 112001 ']' 00:35:26.083 11:22:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:26.083 11:22:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:26.083 11:22:54 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:26.083 11:22:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
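The JSON dumped above is the live bdevperf configuration captured with save_config before the first instance (pid 111535) is stopped: its keyring section carries the two keyring_file_add_key calls, and its bdev section records the attached controller with "psk": "key0". A quick, assumed way to pull those pieces out of such a dump with jq:

# Hypothetical inspection of a save_config dump like the one shown above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
config=$($RPC save_config)

# The keyring subsystem lists both file-backed keys ...
echo "$config" | jq '.subsystems[] | select(.subsystem == "keyring").config'

# ... and the bdev subsystem references the PSK by key name, not by path.
echo "$config" | jq -r '.subsystems[] | select(.subsystem == "bdev").config[]
                        | select(.method == "bdev_nvme_attach_controller").params.psk'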
00:35:26.083 11:22:54 -- keyring/file.sh@115 -- # echo '{ 00:35:26.083 "subsystems": [ 00:35:26.083 { 00:35:26.083 "subsystem": "keyring", 00:35:26.083 "config": [ 00:35:26.083 { 00:35:26.083 "method": "keyring_file_add_key", 00:35:26.083 "params": { 00:35:26.083 "name": "key0", 00:35:26.083 "path": "/tmp/tmp.aBrmLwVQhg" 00:35:26.083 } 00:35:26.083 }, 00:35:26.083 { 00:35:26.083 "method": "keyring_file_add_key", 00:35:26.083 "params": { 00:35:26.083 "name": "key1", 00:35:26.083 "path": "/tmp/tmp.CgNCSpyRoI" 00:35:26.083 } 00:35:26.083 } 00:35:26.083 ] 00:35:26.083 }, 00:35:26.083 { 00:35:26.083 "subsystem": "iobuf", 00:35:26.083 "config": [ 00:35:26.083 { 00:35:26.083 "method": "iobuf_set_options", 00:35:26.083 "params": { 00:35:26.083 "large_bufsize": 135168, 00:35:26.083 "large_pool_count": 1024, 00:35:26.083 "small_bufsize": 8192, 00:35:26.083 "small_pool_count": 8192 00:35:26.083 } 00:35:26.083 } 00:35:26.083 ] 00:35:26.083 }, 00:35:26.083 { 00:35:26.083 "subsystem": "sock", 00:35:26.083 "config": [ 00:35:26.083 { 00:35:26.083 "method": "sock_impl_set_options", 00:35:26.083 "params": { 00:35:26.083 "enable_ktls": false, 00:35:26.083 "enable_placement_id": 0, 00:35:26.083 "enable_quickack": false, 00:35:26.083 "enable_recv_pipe": true, 00:35:26.083 "enable_zerocopy_send_client": false, 00:35:26.083 "enable_zerocopy_send_server": true, 00:35:26.083 "impl_name": "posix", 00:35:26.083 "recv_buf_size": 2097152, 00:35:26.083 "send_buf_size": 2097152, 00:35:26.083 "tls_version": 0, 00:35:26.083 "zerocopy_threshold": 0 00:35:26.083 } 00:35:26.083 }, 00:35:26.083 { 00:35:26.083 "method": "sock_impl_set_options", 00:35:26.083 "params": { 00:35:26.083 "enable_ktls": false, 00:35:26.083 "enable_placement_id": 0, 00:35:26.083 "enable_quickack": false, 00:35:26.083 "enable_recv_pipe": true, 00:35:26.084 "enable_zerocopy_send_client": false, 00:35:26.084 "enable_zerocopy_send_server": true, 00:35:26.084 "impl_name": "ssl", 00:35:26.084 "recv_buf_size": 4096, 00:35:26.084 "send_buf_size": 4096, 00:35:26.084 "tls_version": 0, 00:35:26.084 "zerocopy_threshold": 0 00:35:26.084 } 00:35:26.084 } 00:35:26.084 ] 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "subsystem": "vmd", 00:35:26.084 "config": [] 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "subsystem": "accel", 00:35:26.084 "config": [ 00:35:26.084 { 00:35:26.084 "method": "accel_set_options", 00:35:26.084 "params": { 00:35:26.084 "buf_count": 2048, 00:35:26.084 "large_cache_size": 16, 00:35:26.084 "sequence_count": 2048, 00:35:26.084 "small_cache_size": 128, 00:35:26.084 "task_count": 2048 00:35:26.084 } 00:35:26.084 } 00:35:26.084 ] 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "subsystem": "bdev", 00:35:26.084 "config": [ 00:35:26.084 { 00:35:26.084 "method": "bdev_set_options", 00:35:26.084 "params": { 00:35:26.084 "bdev_auto_examine": true, 00:35:26.084 "bdev_io_cache_size": 256, 00:35:26.084 "bdev_io_pool_size": 65535, 00:35:26.084 "iobuf_large_cache_size": 16, 00:35:26.084 "iobuf_small_cache_size": 128 00:35:26.084 } 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "method": "bdev_raid_set_options", 00:35:26.084 "params": { 00:35:26.084 "process_window_size_kb": 1024 00:35:26.084 } 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "method": "bdev_iscsi_set_options", 00:35:26.084 "params": { 00:35:26.084 "timeout_sec": 30 00:35:26.084 } 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "method": "bdev_nvme_set_options", 00:35:26.084 "params": { 00:35:26.084 "action_on_timeout": "none", 00:35:26.084 "allow_accel_sequence": false, 00:35:26.084 "arbitration_burst": 0, 
00:35:26.084 "bdev_retry_count": 3, 00:35:26.084 "ctrlr_loss_timeout_sec": 0, 00:35:26.084 "delay_cmd_submit": true, 00:35:26.084 "dhchap_dhgroups": [ 00:35:26.084 "null", 00:35:26.084 "ffdhe2048", 00:35:26.084 "ffdhe3072", 00:35:26.084 "ffdhe4096", 00:35:26.084 "ffdhe6144", 00:35:26.084 "ffdhe8192" 00:35:26.084 ], 00:35:26.084 "dhchap_digests": [ 00:35:26.084 "sha256", 00:35:26.084 "sha384", 00:35:26.084 "sha512" 00:35:26.084 ], 00:35:26.084 "disable_auto_failback": false, 00:35:26.084 "fast_io_fail_timeout_sec": 0, 00:35:26.084 "generate_uuids": false, 00:35:26.084 "high_priority_weight": 0, 00:35:26.084 "io_path_stat": false, 00:35:26.084 "io_queue_requests": 512, 00:35:26.084 "keep_alive_timeout_ms": 10000, 00:35:26.084 "low_priority_weight": 0, 00:35:26.084 "medium_priority_weight": 0, 00:35:26.084 "nvme_adminq_poll_period_us": 10000, 00:35:26.084 "nvme_error_stat": false, 00:35:26.084 "nvme_ioq_poll_period_us": 0, 00:35:26.084 "rdma_cm_event_timeout_ms": 0, 00:35:26.084 "rdma_max_cq_size": 0, 00:35:26.084 "rdma_srq_size": 0, 00:35:26.084 "reconnect_delay_sec": 0, 00:35:26.084 "timeout_admin_us": 0, 00:35:26.084 "timeout_us": 0, 00:35:26.084 "transport_ack_timeout": 0, 00:35:26.084 "transport_retry_count": 4, 00:35:26.084 "transport_tos": 0 00:35:26.084 } 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "method": "bdev_nvme_attach_controller", 00:35:26.084 "params": { 00:35:26.084 "adrfam": "IPv4", 00:35:26.084 "ctrlr_loss_timeout_sec": 0, 00:35:26.084 "ddgst": false, 00:35:26.084 "fast_io_fail_timeout_sec": 0, 00:35:26.084 "hdgst": false, 00:35:26.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:26.084 "name": "nvme0", 00:35:26.084 "prchk_guard": false, 00:35:26.084 "prchk_reftag": false, 00:35:26.084 "psk": "key0", 00:35:26.084 "reconnect_delay_sec": 0, 00:35:26.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.084 "traddr": "127.0.0.1", 00:35:26.084 "trsvcid": "4420", 00:35:26.084 "trtype": "TCP" 00:35:26.084 } 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "method": "bdev_nvme_set_hotplug", 00:35:26.084 "params": { 00:35:26.084 "enable": false, 00:35:26.084 "period_us": 100000 00:35:26.084 } 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "method": "bdev_wait_for_examine" 00:35:26.084 } 00:35:26.084 ] 00:35:26.084 }, 00:35:26.084 { 00:35:26.084 "subsystem": "nbd", 00:35:26.084 "config": [] 00:35:26.084 } 00:35:26.084 ] 00:35:26.084 }' 00:35:26.084 11:22:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:26.084 11:22:54 -- common/autotest_common.sh@10 -- # set +x 00:35:26.084 [2024-04-18 11:22:54.643363] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
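The block above is that same configuration being replayed: keyring/file.sh feeds it to a fresh bdevperf through -c /dev/fd/63 (bash process substitution), so both keys are re-registered and the controller is re-attached with key0 during startup, before any RPCs are issued; the later keyring_get_keys | jq length check expecting 2 keys depends on exactly this. A sketch of the pattern, with paths and options taken from the trace:

# Hypothetical sketch of the config-replay step: capture the running
# configuration and start a new bdevperf from the saved JSON via process
# substitution (which shows up on the command line as /dev/fd/63).
BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

config=$($RPC save_config)

"$BPERF" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &

# The real test waits for the RPC socket (waitforlisten) before this check;
# once the new instance is up, both keys should be present again.
$RPC keyring_get_keys | jq length     # expected: 2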
00:35:26.084 [2024-04-18 11:22:54.643460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112001 ] 00:35:26.343 [2024-04-18 11:22:54.784538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.343 [2024-04-18 11:22:54.879140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.601 [2024-04-18 11:22:55.051346] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:27.168 11:22:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:27.168 11:22:55 -- common/autotest_common.sh@850 -- # return 0 00:35:27.168 11:22:55 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:27.168 11:22:55 -- keyring/file.sh@120 -- # jq length 00:35:27.168 11:22:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.427 11:22:55 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:27.427 11:22:55 -- keyring/file.sh@121 -- # get_refcnt key0 00:35:27.427 11:22:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:27.427 11:22:55 -- keyring/common.sh@12 -- # get_key key0 00:35:27.427 11:22:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:27.427 11:22:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.427 11:22:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:27.790 11:22:56 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:27.790 11:22:56 -- keyring/file.sh@122 -- # get_refcnt key1 00:35:27.790 11:22:56 -- keyring/common.sh@12 -- # get_key key1 00:35:27.790 11:22:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:27.790 11:22:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:27.790 11:22:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.790 11:22:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:28.050 11:22:56 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:28.050 11:22:56 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:28.050 11:22:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:28.050 11:22:56 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:28.311 11:22:56 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:28.311 11:22:56 -- keyring/file.sh@1 -- # cleanup 00:35:28.311 11:22:56 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.aBrmLwVQhg /tmp/tmp.CgNCSpyRoI 00:35:28.311 11:22:56 -- keyring/file.sh@20 -- # killprocess 112001 00:35:28.311 11:22:56 -- common/autotest_common.sh@936 -- # '[' -z 112001 ']' 00:35:28.311 11:22:56 -- common/autotest_common.sh@940 -- # kill -0 112001 00:35:28.311 11:22:56 -- common/autotest_common.sh@941 -- # uname 00:35:28.311 11:22:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:28.311 11:22:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112001 00:35:28.311 killing process with pid 112001 00:35:28.311 Received shutdown signal, test time was about 1.000000 seconds 00:35:28.311 00:35:28.311 Latency(us) 00:35:28.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.311 
=================================================================================================================== 00:35:28.311 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:28.311 11:22:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:35:28.311 11:22:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:35:28.311 11:22:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112001' 00:35:28.311 11:22:56 -- common/autotest_common.sh@955 -- # kill 112001 00:35:28.311 11:22:56 -- common/autotest_common.sh@960 -- # wait 112001 00:35:28.311 11:22:56 -- keyring/file.sh@21 -- # killprocess 111500 00:35:28.311 11:22:56 -- common/autotest_common.sh@936 -- # '[' -z 111500 ']' 00:35:28.311 11:22:56 -- common/autotest_common.sh@940 -- # kill -0 111500 00:35:28.311 11:22:56 -- common/autotest_common.sh@941 -- # uname 00:35:28.311 11:22:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:28.311 11:22:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111500 00:35:28.311 killing process with pid 111500 00:35:28.311 11:22:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:28.311 11:22:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:28.311 11:22:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111500' 00:35:28.311 11:22:56 -- common/autotest_common.sh@955 -- # kill 111500 00:35:28.311 [2024-04-18 11:22:56.945285] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:28.311 11:22:56 -- common/autotest_common.sh@960 -- # wait 111500 00:35:28.878 00:35:28.878 real 0m15.962s 00:35:28.878 user 0m39.636s 00:35:28.878 sys 0m3.340s 00:35:28.878 11:22:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:28.878 11:22:57 -- common/autotest_common.sh@10 -- # set +x 00:35:28.878 ************************************ 00:35:28.878 END TEST keyring_file 00:35:28.878 ************************************ 00:35:28.878 11:22:57 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:35:28.878 11:22:57 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:35:28.878 11:22:57 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:35:28.878 11:22:57 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:35:28.878 11:22:57 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:35:28.878 11:22:57 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:35:28.878 11:22:57 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:35:28.878 11:22:57 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:35:28.878 11:22:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:28.878 11:22:57 -- common/autotest_common.sh@10 -- # set +x 00:35:28.878 11:22:57 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:35:28.879 11:22:57 -- common/autotest_common.sh@1378 -- # local 
autotest_es=0 00:35:28.879 11:22:57 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:35:28.879 11:22:57 -- common/autotest_common.sh@10 -- # set +x 00:35:30.780 INFO: APP EXITING 00:35:30.780 INFO: killing all VMs 00:35:30.780 INFO: killing vhost app 00:35:30.780 INFO: EXIT DONE 00:35:31.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:31.037 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:35:31.037 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:35:31.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:31.971 Cleaning 00:35:31.971 Removing: /var/run/dpdk/spdk0/config 00:35:31.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:31.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:31.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:31.971 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:31.971 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:31.971 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:31.971 Removing: /var/run/dpdk/spdk1/config 00:35:31.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:31.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:31.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:31.971 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:31.971 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:31.971 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:31.971 Removing: /var/run/dpdk/spdk2/config 00:35:31.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:31.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:31.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:31.971 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:31.971 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:31.971 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:31.971 Removing: /var/run/dpdk/spdk3/config 00:35:31.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:31.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:31.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:31.971 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:31.971 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:31.971 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:31.971 Removing: /var/run/dpdk/spdk4/config 00:35:31.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:31.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:31.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:31.971 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:31.972 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:31.972 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:31.972 Removing: /dev/shm/nvmf_trace.0 00:35:31.972 Removing: /dev/shm/spdk_tgt_trace.pid73135 00:35:31.972 Removing: /var/run/dpdk/spdk0 00:35:31.972 Removing: /var/run/dpdk/spdk1 00:35:31.972 Removing: /var/run/dpdk/spdk2 00:35:31.972 Removing: /var/run/dpdk/spdk3 00:35:31.972 Removing: /var/run/dpdk/spdk4 00:35:31.972 Removing: /var/run/dpdk/spdk_pid100164 00:35:31.972 Removing: /var/run/dpdk/spdk_pid100208 00:35:31.972 Removing: /var/run/dpdk/spdk_pid100287 00:35:31.972 Removing: /var/run/dpdk/spdk_pid100337 00:35:31.972 Removing: /var/run/dpdk/spdk_pid100679 00:35:31.972 Removing: /var/run/dpdk/spdk_pid100930 00:35:31.972 Removing: 
/var/run/dpdk/spdk_pid101432 00:35:31.972 Removing: /var/run/dpdk/spdk_pid101965 00:35:31.972 Removing: /var/run/dpdk/spdk_pid102562 00:35:31.972 Removing: /var/run/dpdk/spdk_pid102564 00:35:31.972 Removing: /var/run/dpdk/spdk_pid104535 00:35:31.972 Removing: /var/run/dpdk/spdk_pid104624 00:35:31.972 Removing: /var/run/dpdk/spdk_pid104716 00:35:31.972 Removing: /var/run/dpdk/spdk_pid104801 00:35:31.972 Removing: /var/run/dpdk/spdk_pid104968 00:35:31.972 Removing: /var/run/dpdk/spdk_pid105039 00:35:31.972 Removing: /var/run/dpdk/spdk_pid105125 00:35:31.972 Removing: /var/run/dpdk/spdk_pid105203 00:35:31.972 Removing: /var/run/dpdk/spdk_pid105552 00:35:31.972 Removing: /var/run/dpdk/spdk_pid106244 00:35:31.972 Removing: /var/run/dpdk/spdk_pid107599 00:35:31.972 Removing: /var/run/dpdk/spdk_pid107800 00:35:31.972 Removing: /var/run/dpdk/spdk_pid108087 00:35:31.972 Removing: /var/run/dpdk/spdk_pid108387 00:35:31.972 Removing: /var/run/dpdk/spdk_pid108948 00:35:31.972 Removing: /var/run/dpdk/spdk_pid108954 00:35:31.972 Removing: /var/run/dpdk/spdk_pid109321 00:35:31.972 Removing: /var/run/dpdk/spdk_pid109480 00:35:31.972 Removing: /var/run/dpdk/spdk_pid109642 00:35:31.972 Removing: /var/run/dpdk/spdk_pid109734 00:35:31.972 Removing: /var/run/dpdk/spdk_pid109890 00:35:31.972 Removing: /var/run/dpdk/spdk_pid109999 00:35:31.972 Removing: /var/run/dpdk/spdk_pid110678 00:35:31.972 Removing: /var/run/dpdk/spdk_pid110719 00:35:31.972 Removing: /var/run/dpdk/spdk_pid110749 00:35:31.972 Removing: /var/run/dpdk/spdk_pid111006 00:35:31.972 Removing: /var/run/dpdk/spdk_pid111041 00:35:31.972 Removing: /var/run/dpdk/spdk_pid111072 00:35:31.972 Removing: /var/run/dpdk/spdk_pid111500 00:35:31.972 Removing: /var/run/dpdk/spdk_pid111535 00:35:31.972 Removing: /var/run/dpdk/spdk_pid112001 00:35:31.972 Removing: /var/run/dpdk/spdk_pid72973 00:35:31.972 Removing: /var/run/dpdk/spdk_pid73135 00:35:31.972 Removing: /var/run/dpdk/spdk_pid73433 00:35:31.972 Removing: /var/run/dpdk/spdk_pid73530 00:35:31.972 Removing: /var/run/dpdk/spdk_pid73564 00:35:31.972 Removing: /var/run/dpdk/spdk_pid73687 00:35:31.972 Removing: /var/run/dpdk/spdk_pid73717 00:35:31.972 Removing: /var/run/dpdk/spdk_pid73847 00:35:31.972 Removing: /var/run/dpdk/spdk_pid74127 00:35:31.972 Removing: /var/run/dpdk/spdk_pid74309 00:35:31.972 Removing: /var/run/dpdk/spdk_pid74391 00:35:31.972 Removing: /var/run/dpdk/spdk_pid74489 00:35:31.972 Removing: /var/run/dpdk/spdk_pid74588 00:35:31.972 Removing: /var/run/dpdk/spdk_pid74631 00:35:32.231 Removing: /var/run/dpdk/spdk_pid74670 00:35:32.231 Removing: /var/run/dpdk/spdk_pid74737 00:35:32.231 Removing: /var/run/dpdk/spdk_pid74870 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75511 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75579 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75653 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75681 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75764 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75792 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75876 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75904 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75965 00:35:32.231 Removing: /var/run/dpdk/spdk_pid75995 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76045 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76075 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76236 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76276 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76356 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76434 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76463 00:35:32.231 Removing: 
/var/run/dpdk/spdk_pid76539 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76579 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76617 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76656 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76695 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76734 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76775 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76811 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76855 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76890 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76934 00:35:32.231 Removing: /var/run/dpdk/spdk_pid76967 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77011 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77044 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77088 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77128 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77166 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77208 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77249 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77289 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77335 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77405 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77525 00:35:32.231 Removing: /var/run/dpdk/spdk_pid77957 00:35:32.231 Removing: /var/run/dpdk/spdk_pid84756 00:35:32.231 Removing: /var/run/dpdk/spdk_pid85093 00:35:32.231 Removing: /var/run/dpdk/spdk_pid86306 00:35:32.231 Removing: /var/run/dpdk/spdk_pid86696 00:35:32.231 Removing: /var/run/dpdk/spdk_pid86967 00:35:32.231 Removing: /var/run/dpdk/spdk_pid87014 00:35:32.231 Removing: /var/run/dpdk/spdk_pid87903 00:35:32.231 Removing: /var/run/dpdk/spdk_pid87953 00:35:32.231 Removing: /var/run/dpdk/spdk_pid88354 00:35:32.231 Removing: /var/run/dpdk/spdk_pid88886 00:35:32.231 Removing: /var/run/dpdk/spdk_pid89324 00:35:32.231 Removing: /var/run/dpdk/spdk_pid90297 00:35:32.231 Removing: /var/run/dpdk/spdk_pid91283 00:35:32.231 Removing: /var/run/dpdk/spdk_pid91394 00:35:32.231 Removing: /var/run/dpdk/spdk_pid91462 00:35:32.231 Removing: /var/run/dpdk/spdk_pid92924 00:35:32.231 Removing: /var/run/dpdk/spdk_pid93161 00:35:32.231 Removing: /var/run/dpdk/spdk_pid93607 00:35:32.231 Removing: /var/run/dpdk/spdk_pid93712 00:35:32.231 Removing: /var/run/dpdk/spdk_pid93857 00:35:32.231 Removing: /var/run/dpdk/spdk_pid93908 00:35:32.231 Removing: /var/run/dpdk/spdk_pid93948 00:35:32.231 Removing: /var/run/dpdk/spdk_pid93994 00:35:32.231 Removing: /var/run/dpdk/spdk_pid94152 00:35:32.231 Removing: /var/run/dpdk/spdk_pid94305 00:35:32.231 Removing: /var/run/dpdk/spdk_pid94556 00:35:32.231 Removing: /var/run/dpdk/spdk_pid94679 00:35:32.231 Removing: /var/run/dpdk/spdk_pid94928 00:35:32.231 Removing: /var/run/dpdk/spdk_pid95053 00:35:32.231 Removing: /var/run/dpdk/spdk_pid95188 00:35:32.231 Removing: /var/run/dpdk/spdk_pid95529 00:35:32.231 Removing: /var/run/dpdk/spdk_pid95916 00:35:32.231 Removing: /var/run/dpdk/spdk_pid95918 00:35:32.231 Removing: /var/run/dpdk/spdk_pid98157 00:35:32.231 Removing: /var/run/dpdk/spdk_pid98458 00:35:32.231 Removing: /var/run/dpdk/spdk_pid98963 00:35:32.231 Removing: /var/run/dpdk/spdk_pid98969 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99303 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99317 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99331 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99362 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99370 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99517 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99519 00:35:32.231 Removing: /var/run/dpdk/spdk_pid99626 00:35:32.491 Removing: /var/run/dpdk/spdk_pid99629 
00:35:32.491 Removing: /var/run/dpdk/spdk_pid99738 00:35:32.491 Removing: /var/run/dpdk/spdk_pid99740 00:35:32.491 Clean 00:35:32.491 11:23:01 -- common/autotest_common.sh@1437 -- # return 0 00:35:32.491 11:23:01 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:35:32.491 11:23:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:32.491 11:23:01 -- common/autotest_common.sh@10 -- # set +x 00:35:32.491 11:23:01 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:35:32.491 11:23:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:32.491 11:23:01 -- common/autotest_common.sh@10 -- # set +x 00:35:32.491 11:23:01 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:32.491 11:23:01 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:32.491 11:23:01 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:32.491 11:23:01 -- spdk/autotest.sh@389 -- # hash lcov 00:35:32.491 11:23:01 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:32.491 11:23:01 -- spdk/autotest.sh@391 -- # hostname 00:35:32.491 11:23:01 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:32.750 geninfo: WARNING: invalid characters removed from testname! 00:35:54.699 11:23:22 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:57.984 11:23:26 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:00.517 11:23:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:03.071 11:23:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:05.604 11:23:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:08.138 11:23:36 -- spdk/autotest.sh@397 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:10.670 11:23:38 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:10.670 11:23:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:10.670 11:23:38 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:10.670 11:23:38 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.670 11:23:38 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.670 11:23:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.670 11:23:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.670 11:23:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.670 11:23:38 -- paths/export.sh@5 -- $ export PATH 00:36:10.670 11:23:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.670 11:23:38 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:36:10.670 11:23:38 -- common/autobuild_common.sh@435 -- $ date +%s 00:36:10.670 11:23:38 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713439418.XXXXXX 00:36:10.670 11:23:38 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713439418.Y7IZF4 00:36:10.670 11:23:38 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:36:10.670 11:23:38 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:36:10.670 11:23:38 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:36:10.670 11:23:38 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:36:10.670 11:23:38 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:36:10.670 11:23:38 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:36:10.670 11:23:38 -- common/autobuild_common.sh@451 -- $ get_config_params 00:36:10.670 11:23:38 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:36:10.670 11:23:38 -- common/autotest_common.sh@10 -- $ set +x 00:36:10.670 11:23:38 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:36:10.670 11:23:38 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:36:10.670 11:23:38 -- pm/common@17 -- $ local monitor 00:36:10.670 11:23:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:10.670 11:23:38 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=113674 00:36:10.670 11:23:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:10.670 11:23:38 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=113676 00:36:10.670 11:23:38 -- pm/common@21 -- $ date +%s 00:36:10.670 11:23:38 -- pm/common@26 -- $ sleep 1 00:36:10.670 11:23:38 -- pm/common@21 -- $ date +%s 00:36:10.670 11:23:38 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713439418 00:36:10.670 11:23:38 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713439418 00:36:10.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713439418_collect-vmstat.pm.log 00:36:10.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713439418_collect-cpu-load.pm.log 00:36:11.237 11:23:39 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:36:11.237 11:23:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:36:11.237 11:23:39 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:36:11.237 11:23:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:11.237 11:23:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:11.237 11:23:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:11.237 11:23:39 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:11.237 11:23:39 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:11.237 11:23:39 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:11.237 11:23:39 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:11.495 11:23:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:11.495 11:23:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:11.495 11:23:39 -- pm/common@30 -- $ signal_monitor_resources TERM 00:36:11.495 11:23:39 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:36:11.495 11:23:39 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:11.495 11:23:39 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:36:11.495 11:23:39 -- pm/common@45 -- $ pid=113681 00:36:11.495 11:23:39 -- pm/common@52 -- $ sudo kill -TERM 
113681 00:36:11.495 11:23:39 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:11.495 11:23:39 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:36:11.495 11:23:39 -- pm/common@45 -- $ pid=113682 00:36:11.495 11:23:39 -- pm/common@52 -- $ sudo kill -TERM 113682 00:36:11.495 + [[ -n 5999 ]] 00:36:11.495 + sudo kill 5999 00:36:11.504 [Pipeline] } 00:36:11.524 [Pipeline] // timeout 00:36:11.530 [Pipeline] } 00:36:11.549 [Pipeline] // stage 00:36:11.554 [Pipeline] } 00:36:11.573 [Pipeline] // catchError 00:36:11.583 [Pipeline] stage 00:36:11.585 [Pipeline] { (Stop VM) 00:36:11.600 [Pipeline] sh 00:36:11.879 + vagrant halt 00:36:15.165 ==> default: Halting domain... 00:36:21.743 [Pipeline] sh 00:36:22.023 + vagrant destroy -f 00:36:26.286 ==> default: Removing domain... 00:36:26.300 [Pipeline] sh 00:36:26.578 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:36:26.586 [Pipeline] } 00:36:26.604 [Pipeline] // stage 00:36:26.610 [Pipeline] } 00:36:26.625 [Pipeline] // dir 00:36:26.631 [Pipeline] } 00:36:26.645 [Pipeline] // wrap 00:36:26.652 [Pipeline] } 00:36:26.670 [Pipeline] // catchError 00:36:26.678 [Pipeline] stage 00:36:26.679 [Pipeline] { (Epilogue) 00:36:26.693 [Pipeline] sh 00:36:27.008 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:33.599 [Pipeline] catchError 00:36:33.600 [Pipeline] { 00:36:33.609 [Pipeline] sh 00:36:33.881 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:33.881 Artifacts sizes are good 00:36:33.891 [Pipeline] } 00:36:33.904 [Pipeline] // catchError 00:36:33.914 [Pipeline] archiveArtifacts 00:36:33.920 Archiving artifacts 00:36:34.062 [Pipeline] cleanWs 00:36:34.072 [WS-CLEANUP] Deleting project workspace... 00:36:34.072 [WS-CLEANUP] Deferred wipeout is used... 00:36:34.078 [WS-CLEANUP] done 00:36:34.080 [Pipeline] } 00:36:34.097 [Pipeline] // stage 00:36:34.101 [Pipeline] } 00:36:34.115 [Pipeline] // node 00:36:34.120 [Pipeline] End of Pipeline 00:36:34.159 Finished: SUCCESS
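
For reference, the coverage post-processing recorded in the tail of this log (capture the test-time counters, merge them with the pre-test baseline, then filter out DPDK, system, and sample-app sources) reduces to the short bash sketch below. The REPO and OUT variables and the pattern loop are an illustrative condensation of the literal spdk/autotest.sh invocations above, assuming the vagrant checkout layout shown in the log; they are not part of the original script.

    #!/usr/bin/env bash
    # Minimal sketch of the lcov steps seen above; adjust REPO/OUT for another layout.
    set -euo pipefail

    REPO=/home/vagrant/spdk_repo/spdk      # assumed checkout location (as in the log)
    OUT="$REPO/../output"                  # assumed output directory (as in the log)
    LCOV_OPTS=(
      --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1
      --no-external -q
    )

    # Capture the post-test counters into cov_test.info, tagged with the host name.
    lcov "${LCOV_OPTS[@]}" -c -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge the baseline and test captures into a single tracefile.
    lcov "${LCOV_OPTS[@]}" -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"

    # Strip paths that should not count toward SPDK coverage, as the log does
    # one pattern at a time.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov "${LCOV_OPTS[@]}" -r "$OUT/cov_total.info" "$pattern" \
           -o "$OUT/cov_total.info"
    done

    # Drop the intermediate captures once cov_total.info exists.
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

The resulting cov_total.info is what genhtml (or further tooling) would consume; the log itself only keeps the merged tracefile and removes the intermediates.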